Compare commits: `main...1bd9bd62c5` (129 commits)
| SHA1 |
|---|
| 1bd9bd62c5 |
| 87bb6302ff |
| c6a31cb710 |
| b7528027ac |
| f96fe77aed |
| 26ea2e588d |
| e07f7ff130 |
| 54a539c51a |
| 4f8497ff26 |
| 9975d60a91 |
| 3d5db06343 |
| 3e32aa958c |
| ea3bfeee9e |
| fd883a4821 |
| 137524b176 |
| d6880d5956 |
| efe1428f0d |
| a3d0e39ba8 |
| 97c039459c |
| 00905575fc |
| d96910092b |
| 0813b927b0 |
| 7856dcb9a3 |
| 4373a34711 |
| 98b85628de |
| be40b2accd |
| 54fa5845af |
| 06501eccc9 |
| bcbbd4765d |
| acbfb03b10 |
| 51f86f214a |
| a0a0e68c49 |
| d5ca894d55 |
| fc85314e9b |
| 6e70b38524 |
| 5a1dbc04e1 |
| 0e34ea79c6 |
| a74d055993 |
| 8828471c1b |
| 404d874771 |
| 1cb0962fac |
| 7d10409ad1 |
| 98a85154cb |
| 8235de23ec |
| 150268b11d |
| b268f8a372 |
| 0b16de5db8 |
| 65fed3a1a2 |
| 51bdc4e59a |
| 68b63baeac |
| 8329ca25de |
| bc157a990c |
| 9440611f1e |
| 943865958d |
| 0fad67513f |
| 2f5b5fcf6b |
| 3061b54059 |
| ffed88189f |
| 9893f2ed51 |
| 2aa4934288 |
| feb34ad638 |
| 59821d6c16 |
| 38eb2e2d37 |
| 860e57be02 |
| a78aa4c75e |
| cc68ff0545 |
| 638db44aa5 |
| b4660254cd |
| c706ac893c |
| 3072128d31 |
| 53480ec39b |
| 0e90a5aca1 |
| 8c28fbcaa6 |
| 32ab9d3e2d |
| d61346b3e9 |
| d689504eec |
| 815c161544 |
| c83a2cd28f |
| 118ca1d14e |
| fa250e2ae3 |
| 1f5fe13805 |
| 5c9c605184 |
| 881eca6181 |
| 4348e353a2 |
| d1df2a4cf6 |
| c09703053b |
| e4e82ef1c6 |
| 3e0a6b223d |
| 12f6baefad |
| 8a605dfb9c |
| 2a32b84849 |
| 65b2a5da0b |
| 950b88a2ea |
| 67fa4e1e6d |
| de2c2ab21b |
| 5769d4a5af |
| c4950ed029 |
| b5df984498 |
| 8e25d5de2a |
| 27212b1009 |
| b5d6894865 |
| cba3f8d2d9 |
| 2b9e754ff5 |
| fd8b367d52 |
| 59b38bb58a |
| 3b7bee233e |
| 8f13b0b2bf |
| 9f3cb6f6d9 |
| b199ddc8ac |
| 7b9bd44ae5 |
| 940f9964f7 |
| 8f6c1703ac |
| d81254deb2 |
| 9898920ab2 |
| 2042178cbe |
| 555b285d36 |
| 79dfa66247 |
| 1c6bdddbdc |
| 43e8325890 |
| d4121bcbc0 |
| 506044b9fb |
| 56bd92d1f1 |
| 221d7170ce |
| b3418d48de |
| 5113aef8c2 |
| 255b2bd4ef |
| 4a593f2016 |
| 6b58e71c6c |
| 5467e4f4bc |
````diff
@@ -22,11 +22,11 @@ jobs:
     runs-on: ubuntu-latest

     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3
        with:
          ref: publish
      - name: Set up Python
-        uses: actions/setup-python@v5
+        uses: actions/setup-python@v3
        with:
          python-version: '3.10'
      - name: Install dependencies
````
-31

````diff
@@ -130,34 +130,3 @@ dmypy.json

 #clients
 *sync_client*
-
-#App
-connpy-completion-helper
-
-# Gemini & AI Tools
-.gemini/
-GEMINI.md
-
-# Node.js (used by Gemini CLI or plugins)
-node_modules/
-package-lock.json
-package.json
-
-# Development docs
-connpy_roadmap.md
-testall/
-testremote/
-*.db
-*.patch
-scratch.py
-
-# Internal planning and implementation docs
-PLAN_CAPA_SERVICIOS.md
-implementation_plan.md
-remote-plugin-implementation-plan.md
-NETWORK_COMMAND_CENTER_PLAN.md
-ssm_implemmetaiton_plan.md
-async_interact_plan.md
-repo_consolidado_limpio.md
-connpy_roadmap.md
-MULTI_USER_PLAN.md
````
````diff
@@ -1,47 +0,0 @@
-# Privacy Policy
-
-## Introduction
-
-Welcome to Connpy ("we", "our", "us"). Connpy is committed to protecting your privacy. This Privacy Policy explains how we collect, use, disclose, and safeguard your information when you use our app, which utilizes Google Login to manage its own files in your Google Drive. Please read this privacy policy carefully.
-
-## Information We Collect
-
-### Personal Information
-
-When you use Connpy, we may collect the following information:
-- **Google Account Information**: Your email address and basic profile information provided by Google during the login process.
-
-### App-Specific Google Drive Files
-
-Connpy requests access only to the files it creates and manages within your Google Drive. We do not access, read, or manipulate any other files in your Google Drive.
-
-## How We Use Your Information
-
-We use the information we collect in the following ways:
-- **Authentication**: To log you into the app using your Google account.
-- **File Management**: To upload, manage, and organize the files that Connpy creates in your Google Drive.
-
-## Sharing Your Information
-
-We do not share your personal information or any data related to your Google Drive files with third parties, except in the following cases:
-- **Legal Obligations**: If required by law, we may disclose your information to comply with legal processes.
-
-## Data Security
-
-We implement appropriate technical and organizational measures to protect your personal information and the files managed by Connpy from unauthorized access, disclosure, alteration, or destruction.
-
-## Your Rights
-
-You have the following rights regarding your information:
-- **Access and Update**: You can access and update your profile information through your Google account settings.
-- **Revoke Access**: You can revoke Connpy's access to your Google Drive at any time via your Google account permissions settings.
-- **Delete Data**: You can delete the files created by Connpy in your Google Drive at any time.
-
-## Changes to This Privacy Policy
-
-We may update this Privacy Policy from time to time. We will notify you of any changes by posting the new Privacy Policy on our GitHub repository. You are advised to review this Privacy Policy periodically for any changes.
-
-## Contact Us
-
-If you have any questions about this Privacy Policy, please contact us at:
-- **GitHub**: [https://github.com/fluzzi/connpy](https://github.com/fluzzi/connpy)
````
@@ -1,16 +1,10 @@
|
||||
<p align="center">
|
||||
<img src="https://nginx.gederico.dynu.net/images/CONNPY-resized.png" alt="App Logo">
|
||||
</p>
|
||||
|
||||
|
||||
# Connpy
|
||||
# Conn
|
||||
[](https://pypi.org/pypi/connpy/)
|
||||
[](https://pypi.org/pypi/connpy/)
|
||||
[](https://github.com/fluzzi/connpy/blob/main/LICENSE)
|
||||
[](https://pypi.org/pypi/connpy/)
|
||||
|
||||
Connpy is a SSH, SFTP, Telnet, kubectl, Docker pod, and AWS SSM connection manager and automation module for Linux, Mac, and Docker.
|
||||
|
||||
Connpy is a ssh and telnet connection manager and automation module for Linux, Mac and Docker
|
||||
|
||||
## Installation
|
||||
|
||||
````diff
@@ -24,58 +18,33 @@ docker compose -f path/to/folder/docker-compose.yml run -it connpy-app
 ```

 ## Connection manager
-### Privacy Policy
-
-Connpy is committed to protecting your privacy. Our privacy policy explains how we handle user data:
-
-- **Data Access**: Connpy accesses data necessary for managing remote host connections, including server addresses, usernames, and passwords. This data is stored locally on your machine and is not transmitted or shared with any third parties.
-- **Data Usage**: User data is used solely for the purpose of managing and automating SSH, Telnet, and SSM connections.
-- **Data Storage**: All connection details are stored locally and securely on your device. We do not store or process this data on our servers.
-- **Data Sharing**: We do not share any user data with third parties.
-
-### Google Integration
-
-Connpy integrates with Google services for backup purposes:
-
-- **Configuration Backup**: The app allows users to store their device information in the app configuration. This configuration can be synced with Google services to create backups.
-- **Data Access**: Connpy only accesses its own files and does not access any other files on your Google account.
-- **Data Usage**: The data is used solely for backup and restore purposes, ensuring that your device information and configurations are safe and recoverable.
-- **Data Sharing**: Connpy does not share any user data with third parties, including Google. The backup data is only accessible by the user.
-
-For more detailed information, please read our [Privacy Policy](https://connpy.gederico.dynu.net/fluzzi32/connpy/src/branch/main/PRIVATE_POLICY.md).
-
 ### Features
-- Manage connections using SSH, SFTP, Telnet, kubectl, Docker exec, and AWS SSM.
-- Set contexts to manage specific nodes from specific contexts (work/home/clients/etc).
-- You can generate profiles and reference them from nodes using @profilename so you don't
-  need to edit multiple nodes when changing passwords or other information.
-- Nodes can be stored on @folder or @subfolder@folder to organize your devices. They can
-  be referenced using node@subfolder@folder or node@folder.
-- If you have too many nodes, get a completion script using: conn config --completion.
-  Or use fzf by installing pyfzf and running conn config --fzf true.
-- Create in bulk, copy, move, export, and import nodes for easy management.
-- Run automation scripts on network devices.
-- Use AI with a multi-agent system (Engineer/Architect) to manage devices.
-  Supports any LLM provider via litellm (OpenAI, Anthropic, Google, etc.).
-  Features streaming responses, interactive chat, and extensible plugin tools.
-- Add plugins with your own scripts, and execute them remotely.
-- Fully decoupled gRPC Client/Server architecture.
-- Unified UI with syntax highlighting and theming.
+- You can generate profiles and reference them from nodes using @profilename so you dont
+  need to edit multiple nodes when changing password or other information.
+- Nodes can be stored on @folder or @subfolder@folder to organize your devices. Then can
+  be referenced using node@subfolder@folder or node@folder
+- If you have too many nodes. Get completion script using: conn config --completion.
+  Or use fzf installing pyfzf and running conn config --fzf true
+- Create in bulk, copy, move, export and import nodes for easy management.
+- Run automation scripts in network devices.
+- use GPT AI to help you manage your devices.
+- Add plugins with your own scripts.
 - Much more!

 ### Usage:
 ```
 usage: conn [-h] [--add | --del | --mod | --show | --debug] [node|folder] [--sftp]
-            conn {profile,move,mv,copy,cp,list,ls,bulk,export,import,ai,run,api,plugin,config,sync,context} ...
+            conn {profile,move,mv,copy,cp,list,ls,bulk,export,import,ai,run,api,plugin,config} ...

 positional arguments:
-  node|folder           node[@subfolder][@folder]
-                        Connect to specific node or show all matching nodes
-  [@subfolder][@folder]
-                        Show all available connections globally or in specified path
-
-options:
+  node|folder           node[@subfolder][@folder]
+                        Connect to specific node or show all matching nodes
+  [@subfolder][@folder]
+                        Show all available connections globaly or in specified path
+```
+
+### Options:
+```
 -h, --help     show this help message and exit
 -v, --version  Show version
 -a, --add      Add new node[@subfolder][@folder] or [@subfolder]@folder
````
````diff
@@ -84,11 +53,10 @@ options:
 -s, --show     Show node[@subfolder][@folder]
 -d, --debug    Display all conections steps
 -t, --sftp     Connects using sftp instead of ssh
---service-mode Set the backend service mode (local or remote)
---remote       Connect to a remote connpy service via gRPC
---theme        UI Output theme (dark, light, or path)
-
-Commands:
+```
+
+### Commands:
+```
 profile        Manage profiles
 move(mv)       Move node
 copy(cp)       Copy node
````
````diff
@@ -102,7 +70,6 @@ Commands:
 plugin         Manage plugins
 config         Manage app config
 sync           Sync config with Google
-context        Manage contexts with regex matching
 ```

 ### Manage profiles:
````
````diff
@@ -123,35 +90,17 @@ options:

 ### Examples:
 ```
 #Add new profile
 conn profile --add office-user
 #Add new folder
 conn --add @office
 #Add new subfolder
 conn --add @datacenter@office
 #Add node to subfolder
 conn --add server@datacenter@office
 #Add node to folder
 conn --add pc@office
 #Show node information
 conn --show server@datacenter@office
 #Connect to nodes
 conn pc@office
 conn server
-#Create and set new context
-conn context -a office .*@office
-conn context --set office
-#Run a command in a node
-conn run server ls -la
 ```
 ## Plugin Requirements for Connpy

-### Remote Plugin Execution
-When Connpy operates in remote mode, plugins are executed **transparently on the server**:
-- The client automatically downloads the plugin source code (`Parser` class context) to generate the local `argparse` structure and provide autocompletion.
-- The execution phase (`Entrypoint` class) is redirected via gRPC streams to execute in the server's memory, ensuring the plugin runs securely against the server's inventory without passing sensitive data to the client.
-- You can manage remote plugins using the `--remote` flag (e.g. `connpy plugin --add myplugin script.py --remote`).
-
 ### General Structure
 - The plugin script must be a Python file.
 - Only the following top-level elements are allowed in the plugin script:
````
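The `node[@subfolder][@folder]` reference syntax used throughout the examples above can be illustrated with a small parser. This is a hypothetical helper written for this document, not part of connpy's API; connpy's internal resolution logic may differ.

```python
def parse_ref(ref):
    """Split a connpy-style reference like 'node@subfolder@folder'.

    Returns (node, folders) where node is None for pure folder refs
    such as '@office', and folders lists the path parts innermost-first.
    """
    parts = ref.split("@")
    node = parts[0] or None   # '@office' has an empty node part
    folders = parts[1:]       # e.g. ['datacenter', 'office']
    return node, folders
```

For example, `parse_ref("server@datacenter@office")` yields the node `server` inside subfolder `datacenter` of folder `office`, matching the `conn --add server@datacenter@office` example above.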
````diff
@@ -167,8 +116,9 @@ When Connpy operates in remote mode, plugins are executed **transparently on the
   - **Purpose**: Handles parsing of command-line arguments.
   - **Requirements**:
     - Must contain only one method: `__init__`.
-    - The `__init__` method must initialize at least one attribute:
+    - The `__init__` method must initialize at least two attributes:
       - `self.parser`: An instance of `argparse.ArgumentParser`.
+      - `self.description`: A string containing the description of the parser.
 2. **Class `Entrypoint`**:
   - **Purpose**: Acts as the entry point for plugin execution, utilizing parsed arguments and integrating with the main application.
   - **Requirements**:
````
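A minimal plugin satisfying the `Parser`/`Entrypoint` contract described above might look like the sketch below. The `Entrypoint` constructor arguments (`args`, `parser`, `connapp`) are an assumption based on the description "utilizing parsed arguments and integrating with the main application"; check connpy's own plugin examples for the exact signature.

```python
import argparse


class Parser:
    def __init__(self):
        # Initialize both attributes the stricter contract requires:
        # self.parser (an ArgumentParser) and self.description (a string).
        self.description = "Example plugin"
        self.parser = argparse.ArgumentParser(
            prog="myplugin", description=self.description
        )
        self.parser.add_argument("action", choices=["start", "stop"])


class Entrypoint:
    def __init__(self, args, parser, connapp):
        # args: the parsed namespace; parser: the ArgumentParser above;
        # connapp: handle to the main application (assumed name).
        print(f"running {args.action}")


if __name__ == "__main__":
    # Standalone run for testing, as the allowed __main__ block permits.
    p = Parser()
    Entrypoint(p.parser.parse_args(["start"]), p.parser, None)
```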
````diff
@@ -265,94 +215,6 @@ There are 2 methods that allows you to define custom logic to be executed before
 - `if __name__ == "__main__":`
   - This block allows the plugin to be run as a standalone script for testing or independent use.

-### Command Completion Support
-
-Plugins can provide intelligent **tab completion** by defining autocompletion logic. There are two supported methods, with the tree-based approach being the most modern and recommended.
-
-#### 1. Tree-based Completion (Recommended)
-
-Define a function called `_connpy_tree` that returns a declarative navigation tree. This method is highly efficient, supports complex state loops, and is very simple to implement for most use cases.
-
-```python
-def _connpy_tree(info=None):
-    nodes = info.get("nodes", [])
-    return {
-        "__exclude_used__": True,  # Filter out words already typed
-        "__extra__": nodes,        # Suggest nodes at this level
-        "--format": ["json", "yaml", "table"],  # Fixed suggestions
-        "*": {  # Wildcard matches any positional word
-            "interface1": None,
-            "interface2": None,
-            "--verbose": None
-        }
-    }
-```
-
-- **Keys**: Literal completions (exact matches).
-- **`*` Key**: A wildcard that matches any positional word typed by the user.
-- **`__extra__`**: A list or a callable `(words) -> list` that adds dynamic suggestions.
-- **`__exclude_used__`**: (Boolean) If True, automatically filters out words already present in the command line.
-
-#### 2. Legacy Function-based Completion
-
-For backward compatibility or highly custom logic, you can define `_connpy_completion`.
-
-```python
-def _connpy_completion(wordsnumber, words, info=None):
-    if wordsnumber == 3:
-        return ["--help", "--verbose", "start", "stop"]
-
-    elif wordsnumber == 4 and words[2] == "start":
-        return info["nodes"]  # Suggest node names
-
-    return []
-```
-
-| Parameter | Description |
-|---|---|
-| `wordsnumber` | Integer indicating the total number of words on the command line. For plugins, this typically starts at 3. |
-| `words` | A list of tokens (words) already typed. `words[0]` is always the name of the plugin. |
-| `info` | A dictionary of structured context data (`nodes`, `folders`, `profiles`, `config`). |
-
-> In this example, if the user types `connpy myplugin start ` and presses Tab, it will suggest node names.
-
-### Handling Unknown Arguments
-
-Plugins can choose to accept and process unknown arguments that are **not explicitly defined** in the parser. To enable this behavior, the plugin must define the following hidden argument in its `Parser` class:
-
-```
-self.parser.add_argument(
-    "--unknown-args",
-    action="store_true",
-    default=True,
-    help=argparse.SUPPRESS
-)
-```
-
-#### Behavior:
-
-- When this argument is present, Connpy will parse the known arguments and capture any extra (unknown) ones.
-- These unknown arguments will be passed to the plugin as `args.unknown_args` inside the `Entrypoint`.
-- If the user does not pass any unknown arguments, `args.unknown_args` will contain the default value (`True`, unless overridden).
-
-#### Example:
-
-If a plugin accepts unknown tcpdump flags like this:
-
-```
-connpy myplugin -nn -s0
-```
-
-And defines the hidden `--unknown-args` flag as shown above, then:
-
-- `args.unknown_args` inside `Entrypoint.__init__()` will be: `['-nn', '-s0']`
-
-> This allows the plugin to receive and process arguments intended for external tools (e.g., `tcpdump`) without argparse raising an error.
-
-#### Note:
-
-If a plugin does **not** define `--unknown-args`, any extra arguments passed will cause argparse to fail with an unrecognized arguments error.
-
 ### Script Verification
 - The `verify_script` method in `plugins.py` is used to check the plugin script's compliance with these standards.
 - Non-compliant scripts will be rejected to ensure consistency and proper functionality within the plugin system.
````
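To make the tree-based completion contract in the removed hunk above concrete, here is a toy resolver that walks such a tree and produces suggestions for the word being typed. It is an illustration of the declared semantics (`*` wildcard, `__extra__`, `__exclude_used__`), not connpy's actual resolver, which may behave differently in edge cases.

```python
def complete(tree, words):
    """Return sorted suggestions for words[-1], walking words[:-1] through the tree."""
    node = tree
    for word in words[:-1]:
        if not isinstance(node, dict):
            return []
        # Literal key wins; otherwise fall back to the '*' wildcard branch.
        node = node.get(word, node.get("*"))
        if node is None:
            return []
    prefix = words[-1]
    if isinstance(node, list):          # e.g. "--format": ["json", "yaml", "table"]
        cands = list(node)
    elif isinstance(node, dict):
        cands = [k for k in node if k != "*" and not k.startswith("__")]
        extra = node.get("__extra__", [])
        cands += list(extra(words) if callable(extra) else extra)
        if node.get("__exclude_used__"):
            cands = [c for c in cands if c not in words[:-1]]
    else:                               # None leaf: nothing further to suggest
        return []
    return sorted(c for c in cands if c.startswith(prefix))


# The tree from the documentation example, with a fixed node list standing
# in for info["nodes"].
tree = {
    "__exclude_used__": True,
    "__extra__": ["router1", "router2"],
    "--format": ["json", "yaml", "table"],
    "*": {"interface1": None, "interface2": None, "--verbose": None},
}
```

With this tree, typing a node name first drops the user into the wildcard branch, so `complete(tree, ["router1", "--v"])` suggests `--verbose`.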
````diff
@@ -438,90 +300,121 @@ for key in routers.result:
     print(key, ' ---> ', ("pass" if routers.result[key] else "fail"))
 ```
 ### Using AI
-The AI module uses a multi-agent architecture with an **Engineer** (fast execution) and an **Architect** (strategic reasoning). It supports any LLM provider through [litellm](https://github.com/BerriAI/litellm).
 ```python
 import connpy
 conf = connpy.configfile()
-# Uses models and API keys from config, or override them:
-myai = connpy.ai(conf, engineer_model="gemini/gemini-2.5-flash", engineer_api_key="your-key")
-result = myai.ask("go to router1 and show me the running configuration")
-print(result["response"])
-# Streaming is enabled by default for CLI, disable for programmatic use:
-result = myai.ask("show interfaces on all routers", stream=False)
-print(result["response"])
+organization = 'openai-org'
+api_key = "openai-key"
+myia = connpy.ai(conf, organization, api_key)
+input = "go to router 1 and get me the full configuration"
+result = myia.ask(input, dryrun = False)
+print(result)
 ```
-#### AI Plugin Tool Registration
-Plugins can extend the AI system by registering custom tools via the `Preload` class:
-```python
-def _register_my_tools(ai_instance):
-    tool_def = {
-        "type": "function",
-        "function": {
-            "name": "my_custom_tool",
-            "description": "Does something useful.",
-            "parameters": {
-                "type": "object",
-                "properties": {"query": {"type": "string"}},
-                "required": ["query"]
-            }
-        }
-    }
-    ai_instance.register_ai_tool(
-        tool_definition=tool_def,
-        handler=my_handler_function,
-        target="engineer",  # or "architect" or "both"
-        engineer_prompt="- My tool: does X.",
-        architect_prompt=" * My tool (my_custom_tool)."
-    )
-
-class Preload:
-    def __init__(self, connapp):
-        connapp.ai.modify(_register_my_tools)
-```
 ## http API
 With the Connpy API you can run commands on devices using http requests

-## gRPC Service Architecture
-Connpy features a completely decoupled gRPC Client/Server architecture. You can run Connpy as a standalone background service and connect to it remotely via the CLI or other clients.
-
-### 1. Start the Server
-Start the gRPC service by running:
-```bash
-connpy api -s 50051
-```
-The server will handle all configurations, connections, AI sessions, and plugin execution locally on the machine it runs on.
-
-### 2. Connect the Client
-Configure your local CLI client to connect to the remote server:
-```bash
-connpy config --service-mode remote
-connpy config --remote-host localhost:50051
-```
-Once configured, all commands (`connpy node`, `connpy list`, `connpy ai`, etc.) will execute transparently on the remote server via thin-client proxies. You can revert back to standalone execution at any time by running `connpy config --service-mode local`.
-
----
-
-### Programmatic Access (gRPC & SOA)
-If you wish to build your own application (Web, Desktop, or Scripts) using the Connpy backend, you can use the `ServiceProvider` to interact with either a local or remote service transparently.
-
-```python
-import connpy
-from connpy.services.provider import ServiceProvider
-
-# Initialize local config
-config = connpy.configfile()
-
-# Connect to the remote gRPC service
-services = ServiceProvider(
-    config,
-    mode="remote",
-    remote_host="localhost:50051"
-)
-
-# Use any service (the logic is identical to local mode)
-nodes = services.nodes.list_nodes()
-for name in nodes:
-    print(f"Found node: {name}")
-
-# Run a command remotely via streaming
-for chunk in services.execution.run_commands(nodes=["server1"], commands=["uptime"]):
-    print(chunk["output"], end="")
-```
+### 1. List Nodes
+
+**Endpoint**: `/list_nodes`
+
+**Method**: `POST`
+
+**Description**: This route returns a list of nodes. It can also filter the list based on a given keyword.
+
+#### Request Body:
+
+```json
+{
+  "filter": "<keyword>"
+}
+```
+
+* `filter` (optional): A keyword to filter the list of nodes. It returns only the nodes that contain the keyword. If not provided, the route will return the entire list of nodes.
+
+#### Response:
+
+- A JSON array containing the filtered list of nodes.
+
+---
+
+### 2. Get Nodes
+
+**Endpoint**: `/get_nodes`
+
+**Method**: `POST`
+
+**Description**: This route returns a dictionary of nodes with all their attributes. It can also filter the nodes based on a given keyword.
+
+#### Request Body:
+
+```json
+{
+  "filter": "<keyword>"
+}
+```
+
+* `filter` (optional): A keyword to filter the nodes. It returns only the nodes that contain the keyword. If not provided, the route will return the entire list of nodes.
+
+#### Response:
+
+- A JSON array containing the filtered nodes.
+
+---
+
+### 3. Run Commands
+
+**Endpoint**: `/run_commands`
+
+**Method**: `POST`
+
+**Description**: This route runs commands on selected nodes based on the provided action, nodes, and commands. It also supports executing tests by providing expected results.
+
+#### Request Body:
+
+```json
+{
+  "action": "<action>",
+  "nodes": "<nodes>",
+  "commands": "<commands>",
+  "expected": "<expected>",
+  "options": "<options>"
+}
+```
+
+* `action` (required): The action to be performed. Possible values: `run` or `test`.
+* `nodes` (required): A list of nodes or a single node on which the commands will be executed. The nodes can be specified as individual node names or a node group with the `@` prefix. Node groups can also be specified as arrays with a list of nodes inside the group.
+* `commands` (required): A list of commands to be executed on the specified nodes.
+* `expected` (optional, only used when the action is `test`): A single expected result for the test.
+* `options` (optional): Array to pass options to the run command, options are: `prompt`, `parallel`, `timeout`
+
+#### Response:
+
+- A JSON object with the results of the executed commands on the nodes.
+
+---
+
+### 4. Ask AI
+
+**Endpoint**: `/ask_ai`
+
+**Method**: `POST`
+
+**Description**: This route sends to chatgpt IA a request that will parse it into an understandable output for the application and then run the request.
+
+#### Request Body:
+
+```json
+{
+  "input": "<user input request>",
+  "dryrun": true or false
+}
+```
+
+* `input` (required): The user input requesting the AI to perform an action on some devices or get the devices list.
+* `dryrun` (optional): If set to true, it will return the parameters to run the request but it won't run it. default is false.
+
+#### Response:
+
+- A JSON array containing the action to run and the parameters and the result of the action.
````
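The `/run_commands` request body documented in the hunk above can be assembled with a small helper. This is an illustrative sketch only: the field names follow the README text, but the endpoint host, port, and exact server-side validation are not specified here.

```python
import json


def run_commands_payload(nodes, commands, action="run", expected=None, options=None):
    """Build the JSON body for the /run_commands endpoint.

    nodes/commands are lists; action is 'run' or 'test'; expected is only
    meaningful with action='test'; options may include 'prompt', 'parallel',
    or 'timeout' per the documentation above.
    """
    body = {"action": action, "nodes": nodes, "commands": commands}
    if expected is not None:
        body["expected"] = expected
    if options:
        body["options"] = options
    return json.dumps(body)
```

The returned string could then be sent as the POST body to a running connpy API instance.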
+128 -203
@@ -2,38 +2,32 @@
|
||||
'''
|
||||
## Connection manager
|
||||
|
||||
Connpy is a SSH, SFTP, Telnet, kubectl, Docker pod, and AWS SSM connection manager and automation module for Linux, Mac, and Docker.
|
||||
Connpy is a connection manager that allows you to store nodes to connect them fast and password free.
|
||||
|
||||
### Features
|
||||
- Manage connections using SSH, SFTP, Telnet, kubectl, Docker exec, and AWS SSM.
|
||||
- Set contexts to manage specific nodes from specific contexts (work/home/clients/etc).
|
||||
- You can generate profiles and reference them from nodes using @profilename so you don't
|
||||
need to edit multiple nodes when changing passwords or other information.
|
||||
- Nodes can be stored on @folder or @subfolder@folder to organize your devices. They can
|
||||
be referenced using node@subfolder@folder or node@folder.
|
||||
- If you have too many nodes, get a completion script using: conn config --completion.
|
||||
Or use fzf by installing pyfzf and running conn config --fzf true.
|
||||
- Create in bulk, copy, move, export, and import nodes for easy management.
|
||||
- Run automation scripts on network devices.
|
||||
- Use AI with a multi-agent system (Engineer/Architect) to help you manage your devices.
|
||||
Supports any LLM provider via litellm (OpenAI, Anthropic, Google, etc.).
|
||||
- Add plugins with your own scripts, and execute them remotely.
|
||||
- Fully decoupled gRPC Client/Server architecture.
|
||||
- Unified UI with syntax highlighting and theming.
|
||||
- You can generate profiles and reference them from nodes using @profilename so you dont
|
||||
need to edit multiple nodes when changing password or other information.
|
||||
- Nodes can be stored on @folder or @subfolder@folder to organize your devices. Then can
|
||||
be referenced using node@subfolder@folder or node@folder
|
||||
- If you have too many nodes. Get completion script using: conn config --completion.
|
||||
Or use fzf installing pyfzf and running conn config --fzf true
|
||||
- Create in bulk, copy, move, export and import nodes for easy management.
|
||||
- Run automation scripts in network devices.
|
||||
- use GPT AI to help you manage your devices.
|
||||
- Add plugins with your own scripts.
|
||||
- Much more!
|
||||
|
||||
### Usage
|
||||
```
|
||||
usage: conn [-h] [--add | --del | --mod | --show | --debug] [node|folder] [--sftp]
|
||||
conn {profile,move,mv,copy,cp,list,ls,bulk,export,import,ai,run,api,plugin,config,sync,context} ...
|
||||
conn {profile,move,mv,copy,cp,list,ls,bulk,export,import,ai,run,api,plugin,config} ...
|
||||
|
||||
positional arguments:
|
||||
node|folder node[@subfolder][@folder]
|
||||
Connect to specific node or show all matching nodes
|
||||
[@subfolder][@folder]
|
||||
Show all available connections globally or in specified path
|
||||
|
||||
options:
|
||||
node|folder node[@subfolder][@folder]
|
||||
Connect to specific node or show all matching nodes
|
||||
[@subfolder][@folder]
|
||||
Show all available connections globaly or in specified path
|
||||
Options:
|
||||
-h, --help show this help message and exit
|
||||
-v, --version Show version
|
||||
-a, --add Add new node[@subfolder][@folder] or [@subfolder]@folder
|
||||
@@ -42,9 +36,6 @@ options:
|
||||
-s, --show Show node[@subfolder][@folder]
|
||||
-d, --debug Display all conections steps
|
||||
-t, --sftp Connects using sftp instead of ssh
|
||||
--service-mode Set the backend service mode (local or remote)
|
||||
--remote Connect to a remote connpy service via gRPC
|
||||
--theme UI Output theme (dark, light, or path)
|
||||
|
||||
Commands:
|
||||
profile Manage profiles
|
||||
@@ -60,7 +51,6 @@ Commands:
|
||||
plugin Manage plugins
|
||||
config Manage app config
|
||||
sync Sync config with Google
|
||||
context Manage contexts with regex matching
|
||||
```
|
||||
|
||||
### Manage profiles
|
||||
@@ -81,35 +71,16 @@ options:
|
||||
|
||||
### Examples
|
||||
```
|
||||
#Add new profile
|
||||
conn profile --add office-user
|
||||
#Add new folder
|
||||
conn --add @office
|
||||
#Add new subfolder
|
||||
conn --add @datacenter@office
|
||||
#Add node to subfolder
|
||||
conn --add server@datacenter@office
|
||||
#Add node to folder
|
||||
conn --add pc@office
|
||||
#Show node information
|
||||
conn --show server@datacenter@office
|
||||
#Connect to nodes
|
||||
conn pc@office
|
||||
conn server
|
||||
#Create and set new context
|
||||
conn context -a office .*@office
|
||||
conn context --set office
|
||||
#Run a command in a node
|
||||
conn run server ls -la
|
||||
```
|
||||
## Plugin Requirements for Connpy

### Remote Plugin Execution
When Connpy operates in remote mode, plugins are executed **transparently on the server**:
- The client automatically downloads the plugin source code (`Parser` class context) to generate the local `argparse` structure and provide autocompletion.
- The execution phase (`Entrypoint` class) is redirected via gRPC streams to execute in the server's memory, ensuring the plugin runs securely against the server's inventory without passing sensitive data to the client.
- You can manage remote plugins using the `--remote` flag (e.g. `connpy plugin --add myplugin script.py --remote`).

### General Structure
- The plugin script must be a Python file.
- Only the following top-level elements are allowed in the plugin script:
@@ -125,8 +96,9 @@ When Connpy operates in remote mode, plugins are executed **transparently on the
   - **Purpose**: Handles parsing of command-line arguments.
   - **Requirements**:
     - Must contain only one method: `__init__`.
     - The `__init__` method must initialize at least one attribute:
     - The `__init__` method must initialize at least two attributes:
       - `self.parser`: An instance of `argparse.ArgumentParser`.
       - `self.description`: A string containing the description of the parser.
2. **Class `Entrypoint`**:
   - **Purpose**: Acts as the entry point for plugin execution, utilizing parsed arguments and integrating with the main application.
   - **Requirements**:
@@ -222,94 +194,6 @@ There are 2 methods that allow you to define custom logic to be executed before
- `if __name__ == "__main__":`
  - This block allows the plugin to be run as a standalone script for testing or independent use.

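For orientation, here is a minimal sketch of a script that satisfies these requirements. The `Entrypoint.__init__` signature shown (`args`, `parser`, `connapp`) is an assumption for illustration; check the reference example script for the exact interface.

```python
import argparse

class Parser:
    def __init__(self):
        # Required attributes: the argparse instance and its description
        self.description = "Example plugin"
        self.parser = argparse.ArgumentParser(prog="myplugin", description=self.description)
        self.parser.add_argument("action", choices=["start", "stop"])

class Entrypoint:
    # Signature assumed for illustration: parsed args, the argparse parser,
    # and the running Connpy application object.
    def __init__(self, args, parser, connapp):
        if args.action == "start":
            print("starting")

if __name__ == "__main__":
    # Standalone test run, as allowed by the plugin standard
    ns = Parser().parser.parse_args(["start"])
    Entrypoint(ns, Parser().parser, None)
```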
### Command Completion Support

Plugins can provide intelligent **tab completion** by defining autocompletion logic. There are two supported methods, with the tree-based approach being the most modern and recommended.

#### 1. Tree-based Completion (Recommended)

Define a function called `_connpy_tree` that returns a declarative navigation tree. This method is highly efficient, supports complex state loops, and is very simple to implement for most use cases.

```python
def _connpy_tree(info=None):
    nodes = (info or {}).get("nodes", [])  # guard against info=None
    return {
        "__exclude_used__": True,               # Filter out words already typed
        "__extra__": nodes,                     # Suggest nodes at this level
        "--format": ["json", "yaml", "table"],  # Fixed suggestions
        "*": {                                  # Wildcard matches any positional word
            "interface1": None,
            "interface2": None,
            "--verbose": None
        }
    }
```

- **Keys**: Literal completions (exact matches).
- **`*` key**: A wildcard that matches any positional word typed by the user.
- **`__extra__`**: A list or a callable `(words) -> list` that adds dynamic suggestions.
- **`__exclude_used__`**: (Boolean) If True, automatically filters out words already present in the command line.

#### 2. Legacy Function-based Completion

For backward compatibility or highly custom logic, you can define `_connpy_completion`.

```python
def _connpy_completion(wordsnumber, words, info=None):
    if wordsnumber == 3:
        return ["--help", "--verbose", "start", "stop"]
    elif wordsnumber == 4 and words[2] == "start":
        return info["nodes"]  # Suggest node names
    return []
```

| Parameter | Description |
|-----------|-------------|
| `wordsnumber` | Integer indicating the total number of words on the command line. For plugins, this typically starts at 3. |
| `words` | A list of tokens (words) already typed. `words[0]` is always the name of the plugin. |
| `info` | A dictionary of structured context data (`nodes`, `folders`, `profiles`, `config`). |

> In this example, if the user types `connpy myplugin start ` and presses Tab, it will suggest node names.

### Handling Unknown Arguments

Plugins can choose to accept and process unknown arguments that are **not explicitly defined** in the parser. To enable this behavior, the plugin must define the following hidden argument in its `Parser` class:

```python
self.parser.add_argument(
    "--unknown-args",
    action="store_true",
    default=True,
    help=argparse.SUPPRESS
)
```

#### Behavior:

- When this argument is present, Connpy will parse the known arguments and capture any extra (unknown) ones.
- These unknown arguments will be passed to the plugin as `args.unknown_args` inside the `Entrypoint`.
- If the user does not pass any unknown arguments, `args.unknown_args` will contain the default value (`True`, unless overridden).

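Internally this behavior corresponds to argparse's `parse_known_args`; a rough sketch of the mechanism (not Connpy's actual implementation):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--unknown-args", action="store_true", default=True,
                    help=argparse.SUPPRESS)

# Known arguments are parsed normally; unrecognized ones are collected
# in a list instead of raising an "unrecognized arguments" error.
args, extra = parser.parse_known_args(["-nn", "-s0"])
args.unknown_args = extra  # exposed to the plugin as args.unknown_args
```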
#### Example:

If a plugin accepts unknown tcpdump flags like this:

```
connpy myplugin -nn -s0
```

And defines the hidden `--unknown-args` flag as shown above, then:

- `args.unknown_args` inside `Entrypoint.__init__()` will be: `['-nn', '-s0']`

> This allows the plugin to receive and process arguments intended for external tools (e.g., `tcpdump`) without argparse raising an error.

#### Note:

If a plugin does **not** define `--unknown-args`, any extra arguments passed will cause argparse to fail with an unrecognized-arguments error.

### Script Verification
- The `verify_script` method in `plugins.py` checks the plugin script's compliance with these standards.
- Non-compliant scripts are rejected to ensure consistency and proper functionality within the plugin system.
@@ -322,33 +206,112 @@ For a practical example of how to write a compatible plugin script, please refer

This script demonstrates the required structure and implementation details according to the plugin system's standards.

## gRPC Service Architecture
Connpy features a completely decoupled gRPC Client/Server architecture. You can run Connpy as a standalone background service and connect to it remotely via the CLI or other clients.
## http API
With the Connpy API you can run commands on devices using http requests

### 1. Start the Server
Start the gRPC service by running:
```bash
connpy api -s 50051
```
The server will handle all configurations, connections, AI sessions, and plugin execution locally on the machine it runs on.
### 1. List Nodes

### 2. Connect the Client
Configure your local CLI client to connect to the remote server:
```bash
connpy config --service-mode remote
connpy config --remote-host localhost:50051
```
Once configured, all commands (`connpy node`, `connpy list`, `connpy ai`, etc.) will execute transparently on the remote server via thin-client proxies. You can revert to standalone execution at any time by running `connpy config --service-mode local`.
**Endpoint**: `/list_nodes`

### Programmatic Access (gRPC & SOA)
Developers can build their own applications on the Connpy backend by using the `ServiceProvider`:
**Method**: `POST`

```python
from connpy.services.provider import ServiceProvider
services = ServiceProvider(config, mode="remote", remote_host="localhost:50051")
nodes = services.nodes.list_nodes()
```
**Description**: This route returns a list of nodes. It can also filter the list based on a given keyword.

#### Request Body:

```json
{
    "filter": "<keyword>"
}
```

* `filter` (optional): A keyword to filter the list of nodes. Only nodes that contain the keyword are returned. If not provided, the route returns the entire list of nodes.

#### Response:

- A JSON array containing the filtered list of nodes.

---

### 2. Get Nodes

**Endpoint**: `/get_nodes`

**Method**: `POST`

**Description**: This route returns a dictionary of nodes with all their attributes. It can also filter the nodes based on a given keyword.

#### Request Body:

```json
{
    "filter": "<keyword>"
}
```

* `filter` (optional): A keyword to filter the nodes. Only nodes that contain the keyword are returned. If not provided, the route returns the entire list of nodes.

#### Response:

- A JSON array containing the filtered nodes.

---

### 3. Run Commands

**Endpoint**: `/run_commands`

**Method**: `POST`

**Description**: This route runs commands on selected nodes based on the provided action, nodes, and commands. It also supports executing tests by providing expected results.

#### Request Body:

```json
{
    "action": "<action>",
    "nodes": "<nodes>",
    "commands": "<commands>",
    "expected": "<expected>",
    "options": "<options>"
}
```

* `action` (required): The action to be performed. Possible values: `run` or `test`.
* `nodes` (required): A list of nodes or a single node on which the commands will be executed. Nodes can be specified as individual node names or as a node group with the `@` prefix. Node groups can also be specified as arrays with a list of nodes inside the group.
* `commands` (required): A list of commands to be executed on the specified nodes.
* `expected` (optional, only used when the action is `test`): A single expected result for the test.
* `options` (optional): Array of options for the run command: `prompt`, `parallel`, `timeout`.

#### Response:

- A JSON object with the results of the executed commands on the nodes.

---
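As a sketch, a client can assemble the request body like this; the helper function and the server address `http://localhost:8048` are illustrative assumptions, not part of Connpy:

```python
def run_commands_payload(action, nodes, commands, expected=None, options=None):
    """Build a /run_commands body following the rules above (illustrative helper)."""
    if action not in ("run", "test"):
        raise ValueError("action must be 'run' or 'test'")
    body = {"action": action, "nodes": nodes, "commands": commands}
    if action == "test" and expected is not None:
        body["expected"] = expected
    if options:
        # Only these option keys are honored by the endpoint
        body["options"] = {k: v for k, v in options.items()
                           if k in ("prompt", "parallel", "timeout")}
    return body

payload = run_commands_payload("run", "@office", ["show version"],
                               options={"parallel": 10, "bogus": 1})
# POST it with any HTTP client, e.g.:
# requests.post("http://localhost:8048/run_commands", json=payload)
```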

### 4. Ask AI

**Endpoint**: `/ask_ai`

**Method**: `POST`

**Description**: This route sends the request to the ChatGPT AI, which parses it into an output the application can understand, and then runs the request.

#### Request Body:

```json
{
    "input": "<user input request>",
    "dryrun": true or false
}
```

* `input` (required): The user input requesting the AI to perform an action on some devices or get the devices list.
* `dryrun` (optional): If set to true, the route returns the parameters to run the request but does not run it. Default is false.

#### Response:

- A JSON array containing the action to run, its parameters, and the result of the action.

## Automation module
The automation module
@@ -427,50 +390,13 @@ for key in routers.result:
```
import connpy
conf = connpy.configfile()
# Uses models and API keys from config, or override them:
myai = connpy.ai(conf, engineer_model="gemini/gemini-2.5-flash", engineer_api_key="your-key")
result = myai.ask("go to router1 and show me the running configuration")
print(result["response"])
# Streaming is enabled by default for CLI, disable for programmatic use:
result = myai.ask("show interfaces on all routers", stream=False)
print(result["response"])
organization = 'openai-org'
api_key = "openai-key"
myia = connpy.ai(conf, organization, api_key)
input = "go to router 1 and get me the full configuration"
result = myia.ask(input, dryrun = False)
print(result)
```

#### AI Plugin Tool Registration
Plugins can register custom tools with the AI system using `register_ai_tool()` in their `Preload` class:
```python
def _register_my_tools(ai_instance):
    tool_def = {
        "type": "function",
        "function": {
            "name": "my_custom_tool",
            "description": "Does something useful.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"]
            }
        }
    }
    ai_instance.register_ai_tool(
        tool_definition=tool_def,
        handler=my_handler_function,
        target="engineer",  # or "architect" or "both"
        engineer_prompt="- My tool: does X.",
        architect_prompt=" * My tool (my_custom_tool)."
    )

class Preload:
    def __init__(self, connapp):
        connapp.ai.modify(_register_my_tools)
```

## Developer Notes (SOA Architecture)
As of version 2.0, Connpy has migrated to a **Service-Oriented Architecture (SOA)**:
- **`connpy/cli/`**: Contains all CLI handlers. These are responsible for argument parsing, user interaction (via `inquirer`), and visual output (via `printer`).
- **`connpy/services/`**: Contains pure logic services (Node, Profile, Execution, etc.).
- **Zero-Print Policy**: Services must never use `print()`. All output must be returned as data structures or generators to the caller (CLI handlers).
- **ServiceProvider**: Access services via `connapp.services`. This allows transparent switching between local and remote (gRPC) backends without modifying CLI logic.
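The Zero-Print Policy can be illustrated with a hypothetical service/handler pair (the class names here are illustrative, not actual Connpy classes):

```python
class NodeService:
    """Pure logic layer: returns data, never prints."""
    def list_nodes(self):
        return ["router1", "switch1"]

class NodeHandler:
    """CLI layer: owns all presentation."""
    def __init__(self, services):
        self.services = services

    def show(self):
        for name in self.services.list_nodes():
            print(name)  # printing happens only here, in the handler

NodeHandler(NodeService()).show()
```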
'''
from .core import node,nodes
from .configfile import configfile
@@ -479,9 +405,9 @@ from .api import *
from .ai import ai
from .plugins import Plugins
from ._version import __version__
from . import printer
from pkg_resources import get_distribution

__all__ = ["node", "nodes", "configfile", "connapp", "ai", "Plugins", "printer"]
__all__ = ["node", "nodes", "configfile", "connapp", "ai", "Plugins"]
__author__ = "Federico Luzzi"
__pdoc__ = {
    'core': False,
@@ -496,6 +422,5 @@ __pdoc__ = {
    'node.deferred_class_hooks': False,
    'nodes.deferred_class_hooks': False,
    'connapp': False,
    'connapp.encrypt': True,
    'printer': False
    'connapp.encrypt': True
}

@@ -1 +1,2 @@
__version__ = "6.0.0b6"
__version__ = "4.0.0"

File diff suppressed because it is too large

@@ -1,106 +1,186 @@
from flask import Flask, request, jsonify
from connpy import configfile, node, nodes, hooks
from connpy.ai import ai as myai
from waitress import serve
import os
import signal
import time

# Suppress harmless but noisy gRPC fork() warnings from pexpect child processes
os.environ["GRPC_VERBOSITY"] = "NONE"
os.environ["GRPC_ENABLE_FORK_SUPPORT"] = "0"

from connpy import hooks, printer
from connpy.configfile import configfile
app = Flask(__name__)
conf = configfile()

PID_FILE1 = "/run/connpy.pid"
PID_FILE2 = "/tmp/connpy.pid"

def _wait_for_termination():
    try:
        while True:
            time.sleep(86400)
    except KeyboardInterrupt:
        pass

@app.route("/")
def root():
    return jsonify({
        'message': 'Welcome to Connpy api',
        'version': '1.0',
        'documentation': 'https://fluzzi.github.io/connpy/'
    })

@app.route("/list_nodes", methods=["POST"])
def list_nodes():
    conf = app.custom_config
    case = conf.config["case"]
    try:
        data = request.get_json()
        filter = data["filter"]
        if not case:
            if isinstance(filter, list):
                filter = [item.lower() for item in filter]
            else:
                filter = filter.lower()
        output = conf._getallnodes(filter)
    except:
        output = conf._getallnodes()
    return jsonify(output)

@app.route("/get_nodes", methods=["POST"])
def get_nodes():
    conf = app.custom_config
    case = conf.config["case"]
    try:
        data = request.get_json()
        filter = data["filter"]
        if not case:
            if isinstance(filter, list):
                filter = [item.lower() for item in filter]
            else:
                filter = filter.lower()
        output = conf._getallnodesfull(filter)
    except:
        output = conf._getallnodesfull()
    return jsonify(output)

@app.route("/ask_ai", methods=["POST"])
def ask_ai():
    conf = app.custom_config
    data = request.get_json()
    input = data["input"]
    if "dryrun" in data:
        dryrun = data["dryrun"]
    else:
        dryrun = False
    if "chat_history" in data:
        chat_history = data["chat_history"]
    else:
        chat_history = None
    ai = myai(conf)
    return ai.ask(input, dryrun, chat_history)

@app.route("/confirm", methods=["POST"])
def confirm():
    conf = app.custom_config
    data = request.get_json()
    input = data["input"]
    ai = myai(conf)
    return str(ai.confirm(input))

@app.route("/run_commands", methods=["POST"])
def run_commands():
    conf = app.custom_config
    data = request.get_json()
    case = conf.config["case"]
    mynodes = {}
    args = {}
    try:
        action = data["action"]
        nodelist = data["nodes"]
        args["commands"] = data["commands"]
        if action == "test":
            args["expected"] = data["expected"]
    except KeyError as e:
        error = "'{}' is mandatory".format(e.args[0])
        return({"DataError": error})
    if isinstance(nodelist, list):
        mynodes = conf.getitems(nodelist)
    else:
        if not case:
            nodelist = nodelist.lower()
        if nodelist.startswith("@"):
            mynodes = conf.getitem(nodelist)
        else:
            mynodes[nodelist] = conf.getitem(nodelist)

    mynodes = nodes(mynodes, config=conf)
    try:
        args["vars"] = data["vars"]
    except:
        pass
    try:
        options = data["options"]
        thisoptions = {k: v for k, v in options.items() if k in ["prompt", "parallel", "timeout"]}
        args.update(thisoptions)
    except:
        options = None
    if action == "run":
        output = mynodes.run(**args)
    elif action == "test":
        output = {}
        output["result"] = mynodes.test(**args)
        output["output"] = mynodes.output
    else:
        error = "Wrong action '{}'".format(action)
        return({"DataError": error})
    return output

@hooks.MethodHook
def stop_api():
    # Read the process ID (pid) from the file
    try:
        with open(PID_FILE1, "r") as f:
            pid = int(f.readline().strip())
            port_line = f.readline().strip()
            port = int(port_line) if port_line else None
            PID_FILE = PID_FILE1
    except (FileNotFoundError, ValueError, OSError):
            port = int(f.readline().strip())
            PID_FILE = PID_FILE1
    except:
        try:
            with open(PID_FILE2, "r") as f:
                pid = int(f.readline().strip())
                port_line = f.readline().strip()
                port = int(port_line) if port_line else None
                PID_FILE = PID_FILE2
        except (FileNotFoundError, ValueError, OSError):
            printer.warning("Connpy API server is not running.")
            return None
                port = int(f.readline().strip())
                PID_FILE = PID_FILE2
        except:
            print("Connpy api server is not running.")
            return
    # Send a SIGTERM signal to the process
    try:
        os.kill(pid, signal.SIGTERM)
    except OSError as e:
        printer.warning(f"Process kill failed (maybe already dead): {e}")
    except:
        pass
    # Delete the PID file
    os.remove(PID_FILE)
    printer.info(f"Server with process ID {pid} stopped.")
    print(f"Server with process ID {pid} stopped.")
    return port

def debug_api(port=8048, config=None):
    from .grpc_layer.server import serve
    conf = config or configfile()
    server = serve(conf, port=port, debug=True)
    printer.info(f"gRPC Server running in debug mode on port {port}...")
    _wait_for_termination()
    server.stop(0)
@hooks.MethodHook
def debug_api(port=8048):
    app.custom_config = configfile()
    app.run(debug=True, port=port)

def start_server(port=8048, config=None):
    try:
        import sys
        # Ensure project root is in path for the child process
        base_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
        if base_dir not in sys.path:
            sys.path.insert(0, base_dir)

        from connpy.grpc_layer.server import serve
        conf = config or configfile()
        server = serve(conf, port=port, debug=False)
        _wait_for_termination()
    except Exception as e:
        printer.error(f"Background API failed to start: {e}")
        os._exit(1)

def start_api(port=8048, config=None):
    # Check if already running via PID file verification
    for pid_file in [PID_FILE1, PID_FILE2]:
        if os.path.exists(pid_file):
            try:
                with open(pid_file, "r") as f:
                    pid = int(f.readline().strip())
                os.kill(pid, 0)
                # If we get here, process exists
                printer.info(f"API is already running (PID {pid})")
                return
            except (ValueError, OSError, ProcessLookupError):
                # Stale PID file, ignore here, start_api will overwrite
                pass
@hooks.MethodHook
def start_server(port=8048):
    app.custom_config = configfile()
    serve(app, host='0.0.0.0', port=port)

@hooks.MethodHook
def start_api(port=8048):
    if os.path.exists(PID_FILE1) or os.path.exists(PID_FILE2):
        print("Connpy server is already running.")
        return
    pid = os.fork()
    if pid == 0:
        # Child process: detached from terminal
        os.setsid()
        start_server(port, config=config)
        start_server(port)
    else:
        # Parent process: record PID and exit
        try:
            with open(PID_FILE1, "w") as f:
                f.write(str(pid) + "\n" + str(port))
        except OSError:
        except:
            try:
                with open(PID_FILE2, "w") as f:
                    f.write(str(pid) + "\n" + str(port))
            except OSError:
                printer.error("Couldn't create PID file.")
                exit(1)
            printer.start(f"gRPC Server started with process ID {pid} on port {port}")
            except:
                print("Couldn't create PID file")
                return
        print(f'Server is running with process ID {pid} in port {port}')

@@ -1,10 +0,0 @@
from .node_handler import NodeHandler
from .profile_handler import ProfileHandler
from .config_handler import ConfigHandler
from .run_handler import RunHandler
from .ai_handler import AIHandler
from .api_handler import APIHandler
from .plugin_handler import PluginHandler
from .import_export_handler import ImportExportHandler
from .context_handler import ContextHandler

@@ -1,136 +0,0 @@
import sys
from rich.panel import Panel
from rich.markdown import Markdown
from rich.rule import Rule
from rich.prompt import Prompt

from .. import printer

console = printer.console
mdprint = console.print

class AIHandler:
    def __init__(self, app):
        self.app = app

    def dispatch(self, args):
        if args.list_sessions:
            sessions = self.app.services.ai.list_sessions()
            if not sessions:
                printer.info("No saved AI sessions found.")
                return
            columns = ["ID", "Title", "Created At", "Model"]
            rows = [[s["id"], s["title"], s["created_at"], s["model"]] for s in sessions]
            printer.table("AI Persisted Sessions", columns, rows)
            return

        if args.delete_session:
            try:
                self.app.services.ai.delete_session(args.delete_session[0])
                printer.success(f"Session {args.delete_session[0]} deleted.")
            except Exception as e:
                printer.error(str(e))
            return

        # Determine which session_id to resume
        session_id = None
        if args.resume:
            sessions = self.app.services.ai.list_sessions()
            session_id = sessions[0]["id"] if sessions else None
            if not session_id:
                printer.warning("No previous session found to resume.")
        elif args.session:
            session_id = args.session[0]

        # Configure additional arguments for the AI service
        # Priority: CLI args > local configuration
        settings = self.app.services.config_svc.get_settings().get("ai", {})
        arguments = {}

        for key in ["engineer_model", "engineer_api_key", "architect_model", "architect_api_key"]:
            cli_val = getattr(args, key, None)
            if cli_val:
                arguments[key] = cli_val[0]
            elif settings.get(key):
                arguments[key] = settings.get(key)

        # Check keys only if running in local mode (not remote)
        if getattr(self.app.services, "mode", "local") == "local":
            if not arguments.get("engineer_api_key"):
                printer.error("Engineer API key not configured. The chat cannot start.")
                printer.info("Use 'connpy config --engineer-api-key <key>' to set it.")
                sys.exit(1)
            if not arguments.get("architect_api_key"):
                printer.warning("Architect API key not configured. Architect will be unavailable.")
                printer.info("Use 'connpy config --architect-api-key <key>' to enable it.")

        # The CLI handles the rest of the interaction with the underlying agent
        self.app.myai = self.app.services.ai
        self.ai_overrides = arguments

        if args.ask:
            self.single_question(args, session_id)
        else:
            self.interactive_chat(args, session_id)

    def single_question(self, args, session_id):
        query = " ".join(args.ask)
        with console.status("[ai_status]Agent is thinking and analyzing...") as status:
            result = self.app.myai.ask(query, status=status, debug=args.debug, session_id=session_id, trust=args.trust, **self.ai_overrides)

        responder = result.get("responder", "engineer")
        border = "architect" if responder == "architect" else "engineer"
        title = "[architect][bold]Network Architect[/bold][/architect]" if responder == "architect" else "[engineer][bold]Network Engineer[/bold][/engineer]"

        if not result.get("streamed"):
            mdprint(Panel(Markdown(result["response"]), title=title, border_style=border, expand=False))

        if "usage" in result:
            u = result["usage"]
            console.print(f"[debug]Tokens: {u['total']} (Input: {u['input']}, Output: {u['output']})[/debug]")

    def interactive_chat(self, args, session_id):
        history = None
        if session_id:
            session_data = self.app.myai.load_session_data(session_id)
            if session_data:
                history = session_data.get("history", [])
                mdprint(Rule(title=f"[header] Resuming Session: {session_data.get('title')} [/header]", style="border"))
                if history:
                    mdprint(f"[debug]Analyzing {len(history)} previous messages...[/debug]\n")
            else:
                printer.error(f"Could not load session {session_id}. Starting clean.")

        if not history:
            mdprint(Rule(style="engineer"))
            mdprint(Markdown("**Networking Expert Agent**: Hi! I'm your assistant. I can help you diagnose issues, run commands, and manage your nodes.\nType 'exit' to quit.\n"))
            mdprint(Rule(style="engineer"))

        while True:
            try:
                user_query = Prompt.ask("[user_prompt]User[/user_prompt]")
                if not user_query.strip(): continue
                if user_query.lower() in ['exit', 'quit', 'bye']: break

                with console.status("[ai_status]Agent is thinking...") as status:
                    result = self.app.myai.ask(user_query, chat_history=history, status=status, debug=args.debug, trust=args.trust, **self.ai_overrides)

                new_history = result.get("chat_history")
                if new_history is not None:
                    history = new_history

                responder = result.get("responder", "engineer")
                border = "architect" if responder == "architect" else "engineer"
                title = "[architect][bold]Network Architect[/bold][/architect]" if responder == "architect" else "[engineer][bold]Network Engineer[/bold][/engineer]"

                if not result.get("streamed"):
                    response_text = result.get("response", "")
                    if response_text:
                        mdprint(Panel(Markdown(response_text), title=title, border_style=border, expand=False))

                if "usage" in result:
                    u = result["usage"]
                    console.print(f"[debug]Tokens: {u['total']} (Input: {u['input']}, Output: {u['output']})[/debug]")
            except (KeyboardInterrupt, EOFError):
                console.print("\n[dim]Session closed.[/dim]")
                break
@@ -1,53 +0,0 @@
import sys
from .. import printer
from ..services.exceptions import ConnpyError

class APIHandler:
    def __init__(self, app):
        self.app = app

    def dispatch(self, args):
        try:
            status = self.app.services.system.get_api_status()

            if args.command == "stop":
                if not status["running"]:
                    printer.warning("API does not seem to be running.")
                else:
                    stopped = self.app.services.system.stop_api()
                    if stopped:
                        printer.success("API stopped successfully.")

            elif args.command == "restart":
                port = args.data if args.data and isinstance(args.data, int) else None
                if status["running"]:
                    printer.info(f"Stopping server with process ID {status['pid']}...")

                # Service handles port preservation if port is None
                self.app.services.system.restart_api(port=port)

                if status["running"]:
                    printer.info(f"Server with process ID {status['pid']} stopped.")

                # Re-fetch status to show the actual port used
                new_status = self.app.services.system.get_api_status()
                printer.success(f"API restarted on port {new_status.get('port', 'unknown')}.")

            elif args.command == "start":
                if status["running"]:
                    msg = f"Connpy server is already running (PID: {status['pid']}"
                    if status.get("port"):
                        msg += f", Port: {status['port']}"
                    msg += ")."
                    printer.warning(msg)
                else:
                    port = args.data if args.data and isinstance(args.data, int) else 8048
                    self.app.services.system.start_api(port=port)
                    printer.success(f"API started on port {port}.")

            elif args.command == "debug":
                port = args.data if args.data and isinstance(args.data, int) else 8048
                self.app.services.system.debug_api(port=port)
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)
@@ -1,135 +0,0 @@
import sys
import yaml
from .. import printer
from ..services.exceptions import ConnpyError, InvalidConfigurationError
from .help_text import get_instructions


class ConfigHandler:
    def __init__(self, app):
        self.app = app

    def dispatch(self, args):
        actions = {
            "completion": self.show_completion,
            "fzf_wrapper": self.show_fzf_wrapper,
            "case": self.set_case,
            "fzf": self.set_fzf,
            "idletime": self.set_idletime,
            "configfolder": self.set_configfolder,
            "theme": self.set_theme,
            "engineer_model": self.set_ai_config,
            "engineer_api_key": self.set_ai_config,
            "architect_model": self.set_ai_config,
            "architect_api_key": self.set_ai_config,
            "trusted_commands": self.set_ai_config,
            "service_mode": self.set_service_mode,
            "remote_host": self.set_remote_host,
            "sync_remote": self.set_sync_remote,
        }
        handler = actions.get(getattr(args, "command", None))
        if handler:
            return handler(args)

        # If no specific command was triggered, show current configuration
        return self.show_config(args)

    def show_config(self, args):
        settings = self.app.services.config_svc.get_settings()
        yaml_str = yaml.dump(settings, sort_keys=False, default_flow_style=False)
        printer.data("Current Configuration", yaml_str)

    def set_service_mode(self, args):
        new_mode = args.data[0]
        if new_mode == "remote":
            settings = self.app.services.config_svc.get_settings()
            if not settings.get("remote_host"):
                printer.error("Remote host must be configured before switching to remote mode")
                return

        self.app.services.config_svc.update_setting("service_mode", new_mode)

        # Immediate sync of fzf/text cache files for the new mode
        try:
            # 1. Clear old cache files to avoid discrepancies if fetch fails
            self.app.config._generate_nodes_cache(nodes=[], folders=[], profiles=[])

            # 2. Re-initialize services for the new mode
            from ..services.provider import ServiceProvider
            settings = self.app.services.config_svc.get_settings()
            new_services = ServiceProvider(self.app.config, mode=new_mode, remote_host=settings.get("remote_host"))

            # 3. Fetch data from new mode and generate cache
            nodes = new_services.nodes.list_nodes()
            folders = new_services.nodes.list_folders()
            profiles = new_services.profiles.list_profiles()
            new_services.nodes.generate_cache(nodes=nodes, folders=folders, profiles=profiles)

            printer.success("Config saved")
        except Exception as e:
            printer.success("Config saved")
            printer.warning(f"Note: Could not synchronize fzf cache: {e}")

    def set_remote_host(self, args):
        self.app.services.config_svc.update_setting("remote_host", args.data[0])
        printer.success("Config saved")

    def set_theme(self, args):
        try:
            valid_styles = self.app.services.config_svc.apply_theme_from_file(args.data[0])
            # Apply immediately to current session
            printer.apply_theme(valid_styles)
            printer.success(f"Theme '{args.data[0]}' applied and saved")
        except (ConnpyError, InvalidConfigurationError) as e:
            printer.error(str(e))

    def show_fzf_wrapper(self, args):
        print(get_instructions("fzf_wrapper_" + args.data[0]))

    def show_completion(self, args):
        print(get_instructions(args.data[0] + "completion"))

    def set_case(self, args):
        val = (args.data[0].lower() == "true")
        self.app.services.config_svc.update_setting("case", val)
        self.app.case = val
        printer.success("Config saved")

    def set_fzf(self, args):
        val = (args.data[0].lower() == "true")
        self.app.services.config_svc.update_setting("fzf", val)
        self.app.fzf = val
        printer.success("Config saved")

    def set_idletime(self, args):
        try:
            val = max(0, int(args.data[0]))
            self.app.services.config_svc.update_setting("idletime", val)
            printer.success("Config saved")
        except ValueError:
            printer.error("Keepalive must be an integer.")

    def set_configfolder(self, args):
        try:
            self.app.services.config_svc.set_config_folder(args.data[0])
            printer.success("Config saved")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def set_sync_remote(self, args):
        val = (args.data[0].lower() == "true")
        self.app.services.config_svc.update_setting("sync_remote", val)
        self.app.services.sync.sync_remote = val
        printer.success("Config saved")

    def set_ai_config(self, args):
        try:
            settings = self.app.services.config_svc.get_settings()
            aiconfig = settings.get("ai", {})
            aiconfig[args.command] = args.data[0]
            self.app.services.config_svc.update_setting("ai", aiconfig)
            printer.success("Config saved")
        except ConnpyError as e:
            printer.error(str(e))
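The handlers in this diff all route subcommands through the same dispatch-table idiom: a dict mapping command names to bound methods, with a show-config fallback when no command matches. A minimal standalone sketch of that pattern (the class and method names here are illustrative, not connpy's API):

```python
class Handler:
    def __init__(self):
        # Map subcommand names to bound methods; unknown commands fall through.
        self.actions = {"case": self.set_case, "fzf": self.set_fzf}

    def dispatch(self, command):
        handler = self.actions.get(command)
        if handler:
            return handler()
        # No specific command matched: fall back to showing current state.
        return self.show_config()

    def set_case(self):
        return "case"

    def set_fzf(self):
        return "fzf"

    def show_config(self):
        return "show"


h = Handler()
print(h.dispatch("case"))     # case
print(h.dispatch("unknown"))  # show
```

This keeps adding a new subcommand to a one-line dict entry instead of growing an if/elif chain.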
@@ -1,77 +0,0 @@
import sys
import yaml
from .. import printer
from ..services.exceptions import ConnpyError


class ContextHandler:
    def __init__(self, app):
        self.app = app
        self.service = self.app.services.context

    def dispatch(self, args):
        try:
            if args.add:
                if len(args.add) < 2:
                    printer.error("--add requires name and at least one regex")
                    return
                self.service.add_context(args.add[0], args.add[1:])
                printer.success(f"Context '{args.add[0]}' added successfully.")

            elif args.rm:
                if not args.context_name:
                    printer.error("--rm requires a context name")
                    return
                self.service.delete_context(args.context_name)
                printer.success(f"Context '{args.context_name}' deleted successfully.")

            elif args.ls:
                contexts = self.service.list_contexts()
                for ctx in contexts:
                    if ctx["active"]:
                        printer.success(f"{ctx['name']} (active)")
                    else:
                        printer.custom(" ", ctx["name"])

            elif args.set:
                if not args.context_name:
                    printer.error("--set requires a context name")
                    return
                self.service.set_active_context(args.context_name)
                printer.success(f"Context set to: {args.context_name}")

            elif args.show:
                if not args.context_name:
                    printer.error("--show requires a context name")
                    return
                contexts = self.service.contexts
                if args.context_name not in contexts:
                    printer.error(f"Context '{args.context_name}' does not exist")
                    return
                yaml_output = yaml.dump(contexts[args.context_name], sort_keys=False, default_flow_style=False)
                printer.custom(args.context_name, "")
                print(yaml_output)

            elif args.edit:
                if len(args.edit) < 2:
                    printer.error("--edit requires name and at least one regex")
                    return
                self.service.update_context(args.edit[0], args.edit[1:])
                printer.success(f"Context '{args.edit[0]}' modified successfully.")

            else:
                # Default behavior if no flags: show list
                self.dispatch_ls(args)

        except ValueError as e:
            printer.error(str(e))
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def dispatch_ls(self, args):
        contexts = self.service.list_contexts()
        for ctx in contexts:
            if ctx["active"]:
                printer.success(f"{ctx['name']} (active)")
            else:
                printer.custom(" ", ctx["name"])
@@ -1,199 +0,0 @@
import ast
import inquirer
from .validators import Validators


class Forms:
    def __init__(self, app):
        self.app = app
        self.validators = Validators(app)

    def questions_edit(self):
        questions = []
        questions.append(inquirer.Confirm("host", message="Edit Hostname/IP?"))
        questions.append(inquirer.Confirm("protocol", message="Edit Protocol/app?"))
        questions.append(inquirer.Confirm("port", message="Edit Port?"))
        questions.append(inquirer.Confirm("options", message="Edit Options?"))
        questions.append(inquirer.Confirm("logs", message="Edit logging path/file?"))
        questions.append(inquirer.Confirm("tags", message="Edit tags?"))
        questions.append(inquirer.Confirm("jumphost", message="Edit jumphost?"))
        questions.append(inquirer.Confirm("user", message="Edit User?"))
        questions.append(inquirer.Confirm("password", message="Edit password?"))
        return inquirer.prompt(questions)

    def questions_nodes(self, unique, uniques=None, edit=None):
        try:
            defaults = self.app.services.nodes.get_node_details(unique)
            if "tags" not in defaults:
                defaults["tags"] = ""
            if "jumphost" not in defaults:
                defaults["jumphost"] = ""
        except Exception:
            defaults = {"host": "", "protocol": "", "port": "", "user": "", "options": "", "logs": "", "tags": "", "password": "", "jumphost": ""}
        node = {}
        if edit is None:
            edit = {"host": True, "protocol": True, "port": True, "user": True, "password": True, "options": True, "logs": True, "tags": True, "jumphost": True}
        questions = []
        if edit["host"]:
            questions.append(inquirer.Text("host", message="Add Hostname or IP", validate=self.validators.host_validation, default=defaults["host"]))
        else:
            node["host"] = defaults["host"]
        if edit["protocol"]:
            questions.append(inquirer.Text("protocol", message="Select Protocol/app", validate=self.validators.protocol_validation, default=defaults["protocol"]))
        else:
            node["protocol"] = defaults["protocol"]
        if edit["port"]:
            questions.append(inquirer.Text("port", message="Select Port Number", validate=self.validators.port_validation, default=defaults["port"]))
        else:
            node["port"] = defaults["port"]
        if edit["options"]:
            questions.append(inquirer.Text("options", message="Pass extra options to protocol/app", validate=self.validators.default_validation, default=defaults["options"]))
        else:
            node["options"] = defaults["options"]
        if edit["logs"]:
            questions.append(inquirer.Text("logs", message="Pick logging path/file ", validate=self.validators.default_validation, default=defaults["logs"].replace("{", "{{").replace("}", "}}")))
        else:
            node["logs"] = defaults["logs"]
        if edit["tags"]:
            questions.append(inquirer.Text("tags", message="Add tags dictionary", validate=self.validators.tags_validation, default=str(defaults["tags"]).replace("{", "{{").replace("}", "}}")))
        else:
            node["tags"] = defaults["tags"]
        if edit["jumphost"]:
            questions.append(inquirer.Text("jumphost", message="Add Jumphost node", validate=self.validators.jumphost_validation, default=str(defaults["jumphost"]).replace("{", "{{").replace("}", "}}")))
        else:
            node["jumphost"] = defaults["jumphost"]
        if edit["user"]:
            questions.append(inquirer.Text("user", message="Pick username", validate=self.validators.default_validation, default=defaults["user"]))
        else:
            node["user"] = defaults["user"]
        if edit["password"]:
            questions.append(inquirer.List("password", message="Password: Use a local password, no password or a list of profiles to reference?", choices=["Local Password", "Profiles", "No Password"]))
        else:
            node["password"] = defaults["password"]

        answer = inquirer.prompt(questions)
        if answer is None:
            return False

        if "password" in answer:
            if answer["password"] == "Local Password":
                passq = [inquirer.Password("password", message="Set Password")]
                passa = inquirer.prompt(passq)
                if passa is None:
                    return False
                answer["password"] = self.app.services.config_svc.encrypt_password(passa["password"])
            elif answer["password"] == "Profiles":
                passq = [inquirer.Text("password", message="Set a @profile or a comma separated list of @profiles", validate=self.validators.pass_validation)]
                passa = inquirer.prompt(passq)
                if passa is None:
                    return False
                answer["password"] = passa["password"].split(",")
            elif answer["password"] == "No Password":
                answer["password"] = ""

        if "tags" in answer and not answer["tags"].startswith("@") and answer["tags"]:
            answer["tags"] = ast.literal_eval(answer["tags"])

        result = {**uniques, **answer, **node}
        result["type"] = "connection"
        return result

    def questions_profiles(self, unique, edit=None):
        try:
            defaults = self.app.services.profiles.get_profile(unique, resolve=False)
            if "tags" not in defaults:
                defaults["tags"] = ""
            if "jumphost" not in defaults:
                defaults["jumphost"] = ""
        except Exception:
            defaults = {"host": "", "protocol": "", "port": "", "user": "", "options": "", "logs": "", "tags": "", "jumphost": ""}
        profile = {}
        if edit is None:
            edit = {"host": True, "protocol": True, "port": True, "user": True, "password": True, "options": True, "logs": True, "tags": True, "jumphost": True}
        questions = []
        if edit["host"]:
            questions.append(inquirer.Text("host", message="Add Hostname or IP", default=defaults["host"]))
        else:
            profile["host"] = defaults["host"]
        if edit["protocol"]:
            questions.append(inquirer.Text("protocol", message="Select Protocol/app", validate=self.validators.profile_protocol_validation, default=defaults["protocol"]))
        else:
            profile["protocol"] = defaults["protocol"]
        if edit["port"]:
            questions.append(inquirer.Text("port", message="Select Port Number", validate=self.validators.profile_port_validation, default=defaults["port"]))
        else:
            profile["port"] = defaults["port"]
        if edit["options"]:
            questions.append(inquirer.Text("options", message="Pass extra options to protocol/app", default=defaults["options"]))
        else:
            profile["options"] = defaults["options"]
        if edit["logs"]:
            questions.append(inquirer.Text("logs", message="Pick logging path/file ", default=defaults["logs"].replace("{", "{{").replace("}", "}}")))
        else:
            profile["logs"] = defaults["logs"]
        if edit["tags"]:
            questions.append(inquirer.Text("tags", message="Add tags dictionary", validate=self.validators.profile_tags_validation, default=str(defaults["tags"]).replace("{", "{{").replace("}", "}}")))
        else:
            profile["tags"] = defaults["tags"]
        if edit["jumphost"]:
            questions.append(inquirer.Text("jumphost", message="Add Jumphost node", validate=self.validators.profile_jumphost_validation, default=str(defaults["jumphost"]).replace("{", "{{").replace("}", "}}")))
        else:
            profile["jumphost"] = defaults["jumphost"]
        if edit["user"]:
            questions.append(inquirer.Text("user", message="Pick username", default=defaults["user"]))
        else:
            profile["user"] = defaults["user"]
        if edit["password"]:
            questions.append(inquirer.Password("password", message="Set Password"))
        else:
            profile["password"] = defaults["password"]

        answer = inquirer.prompt(questions)
        if answer is None:
            return False

        if "password" in answer:
            if answer["password"] != "":
                answer["password"] = self.app.services.config_svc.encrypt_password(answer["password"])

        if "tags" in answer and answer["tags"]:
            answer["tags"] = ast.literal_eval(answer["tags"])

        result = {**answer, **profile}
        result["id"] = unique
        return result

    def questions_bulk(self, nodes="", hosts=""):
        questions = []
        questions.append(inquirer.Text("ids", message="add a comma separated list of nodes to add", default=nodes, validate=self.validators.bulk_node_validation))
        questions.append(inquirer.Text("location", message="Add a @folder, @subfolder@folder or leave empty", validate=self.validators.bulk_folder_validation))
        questions.append(inquirer.Text("host", message="Add comma separated list of Hostnames or IPs", default=hosts, validate=self.validators.bulk_host_validation))
        questions.append(inquirer.Text("protocol", message="Select Protocol/app", validate=self.validators.protocol_validation))
        questions.append(inquirer.Text("port", message="Select Port Number", validate=self.validators.port_validation))
        questions.append(inquirer.Text("options", message="Pass extra options to protocol/app", validate=self.validators.default_validation))
        questions.append(inquirer.Text("logs", message="Pick logging path/file ", validate=self.validators.default_validation))
        questions.append(inquirer.Text("tags", message="Add tags dictionary", validate=self.validators.tags_validation))
        questions.append(inquirer.Text("jumphost", message="Add Jumphost node", validate=self.validators.jumphost_validation))
        questions.append(inquirer.Text("user", message="Pick username", validate=self.validators.default_validation))
        questions.append(inquirer.List("password", message="Password: Use a local password, no password or a list of profiles to reference?", choices=["Local Password", "Profiles", "No Password"]))

        answer = inquirer.prompt(questions)
        if answer is None:
            return False

        if "password" in answer:
            if answer["password"] == "Local Password":
                passq = [inquirer.Password("password", message="Set Password")]
                passa = inquirer.prompt(passq)
                answer["password"] = self.app.services.config_svc.encrypt_password(passa["password"])
            elif answer["password"] == "Profiles":
                passq = [inquirer.Text("password", message="Set a @profile or a comma separated list of @profiles", validate=self.validators.pass_validation)]
                passa = inquirer.prompt(passq)
                answer["password"] = passa["password"].split(",")
            elif answer["password"] == "No Password":
                answer["password"] = ""

        answer["type"] = "connection"
        if "tags" in answer and not answer["tags"].startswith("@") and answer["tags"]:
            answer["tags"] = ast.literal_eval(answer["tags"])

        return answer
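The forms above accept tags either as a `@profile` reference or as a Python dict literal, and parse the latter with `ast.literal_eval` rather than `eval`, so user input can only produce literals, never execute code. A small standalone sketch of that parsing rule (the function name is illustrative):

```python
import ast

def parse_tags(raw):
    # "@profile" references and empty strings pass through unchanged;
    # anything else is parsed as a Python literal (dict), never eval()'d,
    # so malicious input like "__import__('os')..." raises instead of running.
    if not raw or raw.startswith("@"):
        return raw
    return ast.literal_eval(raw)

print(parse_tags("{'os': 'ios', 'region': 'us-east-1'}"))  # {'os': 'ios', 'region': 'us-east-1'}
print(parse_tags("@mytags"))                               # @mytags
```

`ast.literal_eval` raises `ValueError`/`SyntaxError` on anything that is not a plain literal, which is why the callers wrap it in validation.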
@@ -1,215 +0,0 @@
import os


def get_help(type, parsers=None):
    if type == "export":
        return "Export /path/to/file.yml \[@subfolder1]\[@folder1] \[@subfolderN]\[@folderN]"
    if type == "import":
        return "Import /path/to/file.yml"
    if type == "node":
        return "node\[@subfolder]\[@folder]\nConnect to specific node or show all matching nodes\n\[@subfolder]\[@folder]\nShow all available connections globally or in specified path"
    if type == "usage":
        commands = []
        for subcommand, subparser in parsers.choices.items():
            if subparser.description is not None:
                commands.append(subcommand)
        commands = ",".join(commands)
        usage_help = f"connpy [-h] [--add | --del | --mod | --show | --debug] [node|folder] [--sftp]\n connpy {{{commands}}} ..."
        return usage_help
    return get_instructions(type)


def get_instructions(type="add"):
    if type == "add":
        return """
Welcome to Connpy node Addition Wizard!

Here are some important instructions and tips for configuring your new node:

1. **Profiles**:
   - You can use the configured settings in a profile using `@profilename`.

2. **Available Protocols and Apps**:
   - ssh
   - telnet
   - kubectl (`kubectl exec`)
   - docker (`docker exec`)
   - ssm (`aws ssm start-session`)

3. **Optional Values**:
   - You can leave any value empty except for the hostname/IP.

4. **Passwords**:
   - You can pass one or more passwords using comma-separated `@profiles`.

5. **Logging**:
   - You can use the following variables in the logging file name:
     - `${id}`
     - `${unique}`
     - `${host}`
     - `${port}`
     - `${user}`
     - `${protocol}`

6. **Well-Known Tags**:
   - `os`: Identified by AI to generate commands based on the operating system.
   - `screen_length_command`: Used by automation to avoid pagination on different devices (e.g., `terminal length 0` for Cisco devices).
   - `prompt`: Replaces default app prompt to identify the end of output or where the user can start inputting commands.
   - `kube_command`: Replaces the default command (`/bin/bash`) for `kubectl exec`.
   - `docker_command`: Replaces the default command for `docker exec`.
   - `region`: AWS Region used for `aws ssm start-session`.
   - `profile`: AWS Profile used for `aws ssm start-session`.
   - `ssh_options`: Additional SSH options injected when an SSM node is used as a jumphost (e.g., `-i ~/.ssh/key.pem`).
   - `nc_command`: Replaces the default `nc` command used when bridging connections through Docker or Kubernetes (e.g., `ip netns exec global-vrf nc`).
"""
    if type == "bashcompletion":
        return '''
# Bash completion for connpy
# Run: eval "$(connpy config --completion bash)"
# Or add it to your .bashrc

_connpy_autocomplete()
{
    local strings
    strings=$(python3 -m connpy.completion bash ${#COMP_WORDS[@]} "${COMP_WORDS[@]}")

    local IFS=$'\\t'
    COMPREPLY=( $(compgen -W "$strings" -- "${COMP_WORDS[$COMP_CWORD]}") )
}
complete -o nosort -F _connpy_autocomplete conn
complete -o nosort -F _connpy_autocomplete connpy
'''
    if type == "zshcompletion":
        return '''
# Zsh completion for connpy
# Run: eval "$(connpy config --completion zsh)"
# Or add it to your .zshrc
# Make sure compinit is loaded

autoload -U compinit && compinit
_connpy_autocomplete()
{
    local COMP_WORDS num strings
    COMP_WORDS=( $words )
    num=${#COMP_WORDS[@]}
    if [[ $words =~ '.* $' ]]; then
        num=$(($num + 1))
    fi
    strings=$(python3 -m connpy.completion zsh ${num} ${COMP_WORDS[@]})

    local IFS=$'\\t'
    compadd "$@" -- ${=strings}
}
compdef _connpy_autocomplete conn
compdef _connpy_autocomplete connpy
'''
    if type == "fzf_wrapper_bash":
        return '''\n#Here starts bash 0ms fzf wrapper for connpy
connpy() {
    if [ $# -eq 0 ]; then
        local selected
        local configdir=$(cat ~/.config/conn/.folder 2>/dev/null || echo ~/.config/conn)
        if [ -s "$configdir/.fzf_nodes_cache.txt" ]; then
            selected=$(cat "$configdir/.fzf_nodes_cache.txt" | fzf-tmux -i -d 25%)
        else
            command connpy
            return
        fi
        if [ -n "$selected" ]; then
            command connpy "$selected"
        fi
    else
        command connpy "$@"
    fi
}
alias c="connpy"
#Here ends bash 0ms fzf wrapper for connpy
'''
    if type == "fzf_wrapper_zsh":
        return '''\n#Here starts zsh 0ms fzf wrapper for connpy
connpy() {
    if [ $# -eq 0 ]; then
        local selected
        local configdir=$(cat ~/.config/conn/.folder 2>/dev/null || echo ~/.config/conn)
        if [ -s "$configdir/.fzf_nodes_cache.txt" ]; then
            selected=$(cat "$configdir/.fzf_nodes_cache.txt" | fzf-tmux -i -d 25%)
        else
            command connpy
            return
        fi
        if [ -n "$selected" ]; then
            command connpy "$selected"
        fi
    else
        command connpy "$@"
    fi
}
alias c="connpy"
#Here ends zsh 0ms fzf wrapper for connpy
'''
    if type == "run":
        return "node[@subfolder][@folder] command to run\nRun the specific command on the node and print output\n/path/to/file.yaml\nUse a yaml file to run an automation script"
    if type == "generate":
        return r'''---
tasks:
- name: "Config"

  action: 'run' #Action can be test or run. Mandatory

  nodes: #List of nodes to work on. Mandatory
  - 'router1@office' #You can add specific nodes
  - '@aws' #entire folders or subfolders
  - 'router.*@office' #or use regex to filter inside a folder

  commands: #List of commands to send, use {name} to pass variables
  - 'term len 0'
  - 'conf t'
  - 'interface {if}'
  - 'ip address 10.100.100.{id} 255.255.255.255'
  - '{commit}'
  - 'end'

  variables: #Variables to use on commands and expected. Optional
    __global__: #Global variables to use on all nodes, fallback if missing in the node.
      commit: ''
      if: 'loopback100'
    router1@office:
      id: 1
    router2@office:
      id: 2
      commit: 'commit'
    router3@office:
      id: 3
    vrouter1@aws:
      id: 4
    vrouterN@aws:
      id: 5

  output: /home/user/logs #Type of output, if null you only get Connection and test result. Choices are: null,stdout,/path/to/folder. Folder path works on both 'run' and 'test' actions.

  options:
    prompt: r'>$|#$|\$$|>.$|#.$|\$.$' #Optional prompt to check on your devices, default should work on most devices.
    parallel: 10 #Optional number of nodes to run commands on parallel. Default 10.
    timeout: 20 #Optional time to wait in seconds for prompt, expected or EOF. Default 20.

- name: "TestConfig"
  action: 'test'
  nodes:
  - 'router1@office'
  - '@aws'
  commands:
  - 'ping 10.100.100.{id}'
  expected: '!' #Expected text to find when running test action. Mandatory for 'test'
  variables:
    router1@office:
      id: 1
    router2@office:
      id: 2
      commit: 'commit'
    router3@office:
      id: 3
    vrouter1@aws:
      id: 4
    vrouterN@aws:
      id: 5
  output: null
...'''
    return ""
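`get_help("usage")` builds the usage line by walking the argparse subparsers and keeping only those that carry a description, so internal subcommands stay hidden. A standalone sketch of that idea (the subcommand names below are made up for illustration):

```python
import argparse

parser = argparse.ArgumentParser(prog="connpy")
sub = parser.add_subparsers(dest="command")

# Subparsers with a description are user-facing; one without stays hidden.
sub.add_parser("config", description="Manage configuration")
sub.add_parser("run", description="Run commands on nodes")
sub.add_parser("internal")  # no description -> excluded from usage

commands = [name for name, p in sub.choices.items() if p.description is not None]
usage = f"connpy {{{','.join(commands)}}} ..."
print(usage)  # connpy {config,run} ...
```

Since `sub.choices` preserves registration order, the usage string lists subcommands in the order they were added.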
@@ -1,80 +0,0 @@
import os
import inquirer
try:
    from pyfzf.pyfzf import FzfPrompt
except ImportError:
    FzfPrompt = None


def get_config_dir():
    home = os.path.expanduser("~")
    defaultdir = os.path.join(home, '.config/conn')
    pathfile = os.path.join(defaultdir, '.folder')
    try:
        with open(pathfile, "r") as f:
            return f.read().strip()
    except Exception:
        return defaultdir


def nodes_completer(prefix, parsed_args, **kwargs):
    configdir = get_config_dir()
    cache_file = os.path.join(configdir, '.fzf_nodes_cache.txt')
    if os.path.exists(cache_file):
        with open(cache_file, "r") as f:
            return [line.strip() for line in f if line.startswith(prefix)]
    return []


def folders_completer(prefix, parsed_args, **kwargs):
    configdir = get_config_dir()
    cache_file = os.path.join(configdir, '.folders_cache.txt')
    if os.path.exists(cache_file):
        with open(cache_file, "r") as f:
            return [line.strip() for line in f if line.startswith(prefix)]
    return []


def profiles_completer(prefix, parsed_args, **kwargs):
    configdir = get_config_dir()
    cache_file = os.path.join(configdir, '.profiles_cache.txt')
    if os.path.exists(cache_file):
        with open(cache_file, "r") as f:
            return [line.strip() for line in f if line.startswith(prefix)]
    return []


def choose(app, list_, name, action):
    # Generates an inquirer list to pick
    # Safeguard: never prompt if running in an autocomplete shell
    if os.environ.get("_ARGCOMPLETE") or os.environ.get("COMP_LINE"):
        return None

    if FzfPrompt and app.fzf and os.environ.get("_ARGCOMPLETE") is None and os.environ.get("COMP_LINE") is None:
        fzf_prompt = FzfPrompt(executable_path="fzf-tmux")
        if not app.case:
            fzf_prompt = FzfPrompt(executable_path="fzf-tmux -i")
        answer = fzf_prompt.prompt(list_, fzf_options="-d 25%")
        if len(answer) == 0:
            return None
        else:
            return answer[0]
    else:
        questions = [inquirer.List(name, message="Pick {} to {}:".format(name, action), choices=list_, carousel=True)]
        answer = inquirer.prompt(questions)
        if answer is None:
            return None
        else:
            return answer[name]


def toplevel_completer(prefix, parsed_args, **kwargs):
    commands = ["node", "profile", "move", "mv", "copy", "cp", "list", "ls", "bulk", "export", "import", "ai", "run", "api", "context", "plugin", "config", "sync"]

    configdir = get_config_dir()
    cache_file = os.path.join(configdir, '.fzf_nodes_cache.txt')
    nodes = []
    if os.path.exists(cache_file):
        with open(cache_file, "r") as f:
            nodes = [line.strip() for line in f if line.startswith(prefix)]

    cache_folders = os.path.join(configdir, '.folders_cache.txt')
    if os.path.exists(cache_folders):
        with open(cache_folders, "r") as f:
            nodes += [line.strip() for line in f if line.startswith(prefix)]

    return [c for c in commands + nodes if c.startswith(prefix)]
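The three `*_completer` functions share one behavior: read a cache file if it exists and return the lines matching the typed prefix, so shell completion never touches the real backend. A self-contained sketch of that logic, exercised against a temporary cache file (paths and entries are illustrative):

```python
import os
import tempfile

def cache_completer(cache_file, prefix):
    # Return cached entries matching the typed prefix; empty list if no cache yet.
    if not os.path.exists(cache_file):
        return []
    with open(cache_file) as f:
        return [line.strip() for line in f if line.startswith(prefix)]

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, ".fzf_nodes_cache.txt")
    with open(path, "w") as f:
        f.write("router1@office\nrouter2@office\nvrouter1@aws\n")
    matches = cache_completer(path, "router")
    print(matches)  # ['router1@office', 'router2@office']
```

Keeping completion reads against a flat text cache is what makes the fzf wrapper above "0ms": no Python import of the full app is needed on every keystroke.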
@@ -1,85 +0,0 @@
import os
import sys
import inquirer
from .. import printer
from ..services.exceptions import ConnpyError
from .forms import Forms


class ImportExportHandler:
    def __init__(self, app):
        self.app = app
        self.forms = Forms(app)

    def dispatch_import(self, args):
        file_path = args.data[0]
        try:
            printer.warning("This could overwrite your current configuration!")
            question = [inquirer.Confirm("import", message=f"Are you sure you want to import {file_path}?")]
            confirm = inquirer.prompt(question)
            if confirm is None or not confirm["import"]:
                sys.exit(7)

            self.app.services.import_export.import_from_file(file_path)
            printer.success(f"File {file_path} imported successfully.")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def dispatch_export(self, args):
        file_path = args.data[0]
        folders = args.data[1:] if len(args.data) > 1 else None
        try:
            self.app.services.import_export.export_to_file(file_path, folders=folders)
            printer.success(f"File {file_path} generated successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)
        sys.exit()

    def bulk(self, args):
        if args.file and os.path.isfile(args.file[0]):
            with open(args.file[0], 'r') as f:
                lines = f.readlines()

            # Expecting at least 2 lines: nodes, then hosts
            if len(lines) < 2:
                printer.error("The file must contain at least two lines: one for nodes, one for hosts.")
                sys.exit(11)

            nodes = lines[0].strip()
            hosts = lines[1].strip()
            newnodes = self.forms.questions_bulk(nodes, hosts)
        else:
            newnodes = self.forms.questions_bulk()

        if newnodes is False:
            sys.exit(7)

        if not self.app.case:
            newnodes["location"] = newnodes["location"].lower()
            newnodes["ids"] = newnodes["ids"].lower()

        # Handle the case where location might be a file reference (e.g. from a prompt)
        location = newnodes["location"]
        if location.startswith("@") and "/" in location:
            # Extract the actual @folder part (e.g. @testall from @testall/.folders_cache.txt)
            location = location.split("/")[0]
            newnodes["location"] = location

        ids = newnodes["ids"].split(",")
        # Append location to each id for proper folder assignment
        location = newnodes["location"]
        if location:
            ids = [f"{i}{location}" for i in ids]

        hosts = newnodes["host"].split(",")

        try:
            count = self.app.services.nodes.bulk_add(ids, hosts, newnodes)
            if count > 0:
                printer.success(f"Successfully added {count} nodes.")
            else:
                printer.info("0 nodes added")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)
@@ -1,234 +0,0 @@
import sys
import yaml
import inquirer
from rich.markdown import Markdown

from .. import printer
from ..services.exceptions import ConnpyError, InvalidConfigurationError
from .helpers import choose
from .forms import Forms
from .help_text import get_instructions


class NodeHandler:
    def __init__(self, app):
        self.app = app
        self.forms = Forms(app)

    def dispatch(self, args):
        if not self.app.case and args.data is not None:
            args.data = args.data.lower()
        actions = {"version": self.version, "connect": self.connect, "add": self.add, "del": self.delete, "mod": self.modify, "show": self.show}
        return actions.get(args.action)(args)

    def version(self, args):
        from .._version import __version__
        printer.info(f"Connpy {__version__}")

    def connect(self, args):
        if args.data is None:
            try:
                matches = self.app.services.nodes.list_nodes()
            except Exception as e:
                printer.error(f"Failed to list nodes: {e}")
                sys.exit(1)

            if len(matches) == 0:
                printer.warning("There are no nodes created")
                printer.info("try: connpy --help")
                sys.exit(9)
        else:
            try:
                matches = self.app.services.nodes.list_nodes(args.data)
            except Exception:
                matches = []

        if len(matches) == 0:
            printer.error(f"{args.data} not found")
            sys.exit(2)
        elif len(matches) > 1:
            matches[0] = choose(self.app, matches, "node", "connect")

        if matches[0] is None:
            sys.exit(7)

        try:
            self.app.services.nodes.connect_node(
                matches[0],
                sftp=args.sftp,
                debug=args.debug,
                logger=self.app._service_logger
            )
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def delete(self, args):
        if args.data is None:
            printer.error("Missing argument node")
            sys.exit(3)

        is_folder = args.data.startswith("@")
        try:
            if is_folder:
                matches = self.app.services.nodes.list_folders(args.data)
            else:
                matches = self.app.services.nodes.list_nodes(args.data)
        except Exception:
            matches = []

        if len(matches) == 0:
            printer.error(f"{args.data} not found")
            sys.exit(2)

        printer.info(f"Removing: {matches}")
        question = [inquirer.Confirm("delete", message="Are you sure you want to continue?")]
        confirm = inquirer.prompt(question)
        if confirm is None or not confirm["delete"]:
            sys.exit(7)

        try:
            for item in matches:
                self.app.services.nodes.delete_node(item, is_folder=is_folder)

            if len(matches) == 1:
                printer.success(f"{matches[0]} deleted successfully")
            else:
                printer.success(f"{len(matches)} items deleted successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def add(self, args):
        try:
            args.data = self.app._type_node(args.data)
        except ValueError as e:
            printer.error(str(e))
            sys.exit(3)

        if args.data is None:
            printer.error("Missing argument node")
            sys.exit(3)

        is_folder = args.data.startswith("@")
        try:
            if is_folder:
                uniques = self.app.services.nodes.explode_unique(args.data)
                if not uniques:
                    raise InvalidConfigurationError(f"Invalid folder {args.data}")
                self.app.services.nodes.add_node(args.data, {}, is_folder=True)
                printer.success(f"{args.data} added successfully")
            else:
                if args.data in self.app.nodes_list:
                    printer.error(f"Node '{args.data}' already exists.")
                    sys.exit(1)
                uniques = self.app.services.nodes.explode_unique(args.data)

                # Fail fast if the parent folder does not exist
                self.app.services.nodes.validate_parent_folder(args.data)

                printer.console.print(Markdown(get_instructions()))

                new_node_data = self.forms.questions_nodes(args.data, uniques)
                if not new_node_data:
                    sys.exit(7)
                self.app.services.nodes.add_node(args.data, new_node_data)
                printer.success(f"{args.data} added successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def show(self, args):
        if args.data is None:
            printer.error("Missing argument node")
            sys.exit(3)

        try:
            matches = self.app.services.nodes.list_nodes(args.data)
        except Exception:
            matches = []

        if len(matches) == 0:
            printer.error(f"{args.data} not found")
            sys.exit(2)
        elif len(matches) > 1:
            matches[0] = choose(self.app, matches, "node", "show")

        if matches[0] is None:
            sys.exit(7)

        try:
            node = self.app.services.nodes.get_node_details(matches[0])
            yaml_output = yaml.dump(node, sort_keys=False, default_flow_style=False)
            printer.data(matches[0], yaml_output)
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def modify(self, args):
        if args.data is None:
            printer.error("Missing argument node")
            sys.exit(3)

        try:
            matches = self.app.services.nodes.list_nodes(args.data)
        except Exception:
            matches = []

        if len(matches) == 0:
            printer.error(f"No connection found with filter: {args.data}")
            sys.exit(2)

        unique = matches[0] if len(matches) == 1 else None
        uniques = self.app.services.nodes.explode_unique(unique) if unique else {"id": None, "folder": None}

        printer.info(f"Editing: {matches}")
        node_details = {}
        for i in matches:
            node_details[i] = self.app.services.nodes.get_node_details(i)

        edits = self.forms.questions_edit()
        if edits is None:
            sys.exit(7)

        # Use the first match as the base for defaults when multiple matches exist
        base_unique = matches[0]
        base_uniques = self.app.services.nodes.explode_unique(base_unique)
        updatenode = self.forms.questions_nodes(base_unique, base_uniques, edit=edits)
        if not updatenode:
            sys.exit(7)

        try:
            if len(matches) == 1:
                # Compare against the current state to detect "Nothing to do"
                current = node_details[matches[0]].copy()
                current.update(uniques)
                current["type"] = "connection"
                if sorted(updatenode.items()) == sorted(current.items()):
                    printer.info("Nothing to do here")
                    return
                self.app.services.nodes.update_node(matches[0], updatenode)
                printer.success(f"{args.data} edited successfully")
            else:
                editcount = 0
                for k in matches:
                    updated_item = self.app.services.nodes.explode_unique(k)
                    updated_item["type"] = "connection"
                    updated_item.update(node_details[k])

                    this_item_changed = False
                    for key, should_edit in edits.items():
                        if should_edit:
                            this_item_changed = True
                            updated_item[key] = updatenode[key]

                    if this_item_changed:
                        editcount += 1
                        self.app.services.nodes.update_node(k, updated_item)

                if editcount == 0:
                    printer.info("Nothing to do here")
                else:
                    printer.success(f"{matches} edited successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)
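The handlers deleted in this file all use the same dict-based dispatch pattern (`actions.get(args.action)(args)`). A minimal sketch of that pattern, with an explicit error for unknown actions (function and error handling are illustrative, not the original code):

```python
def dispatch(action, actions, args):
    """Look up a handler in a dict and invoke it with args."""
    handler = actions.get(action)
    if handler is None:
        raise KeyError(f"unknown action: {action}")
    return handler(args)


# Hypothetical handlers standing in for NodeHandler methods
actions = {"show": lambda a: f"show:{a}", "del": lambda a: f"del:{a}"}
print(dispatch("show", actions, "r1"))  # → show:r1
```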
@@ -1,150 +0,0 @@
import sys
import yaml
from .. import printer
from ..services.exceptions import ConnpyError


class PluginHandler:
    def __init__(self, app):
        self.app = app

    def dispatch(self, args):
        try:
            # Route to the PluginService selected by the app's mode; purely
            # local operations instantiate a local service directly below.
            is_remote = getattr(args, "remote", False)
            if is_remote and self.app.services.mode != "remote":
                printer.error("Cannot use --remote flag when not running in remote mode.")
                return

            if args.add:
                self.app.services.plugins.add_plugin(args.add[0], args.add[1])
                printer.success(f"Plugin {args.add[0]} added successfully{' remotely' if is_remote else ''}.")
            elif args.update:
                self.app.services.plugins.add_plugin(args.update[0], args.update[1], update=True)
                printer.success(f"Plugin {args.update[0]} updated successfully{' remotely' if is_remote else ''}.")
            elif args.delete:
                self.app.services.plugins.delete_plugin(args.delete[0])
                printer.success(f"Plugin {args.delete[0]} deleted successfully{' remotely' if is_remote else ''}.")
            elif args.enable:
                name = args.enable[0]
                if is_remote:
                    self.app.plugins.preferences[name] = "remote"
                else:
                    if name in self.app.plugins.preferences:
                        del self.app.plugins.preferences[name]

                self.app.plugins._save_preferences(self.app.services.config_svc.get_default_dir())

                # Always try to enable it locally (remove .bkp) if it exists,
                # regardless of mode, to keep files consistent with the "enabled" state
                try:
                    # Use a local service instance to ensure we touch local files
                    from ..services.plugin_service import PluginService
                    local_svc = PluginService(self.app.services.config)
                    local_svc.enable_plugin(name)
                except Exception:
                    pass  # Ignore if not found locally or already enabled

                if is_remote and self.app.services.mode == "remote":
                    self.app.services.plugins.enable_plugin(name)

                printer.success(f"Plugin {name} enabled successfully{' remotely' if is_remote else ' locally'}.")
            elif args.disable:
                name = args.disable[0]
                success = False
                if is_remote:
                    if self.app.services.mode == "remote":
                        self.app.services.plugins.disable_plugin(name)
                        success = True
                else:
                    # Disable locally
                    from ..services.plugin_service import PluginService
                    local_svc = PluginService(self.app.services.config)
                    try:
                        if local_svc.disable_plugin(name):
                            success = True
                    except Exception as e:
                        printer.warning(f"Could not disable local plugin: {e}")

                if success:
                    printer.success(f"Plugin {name} disabled successfully{' remotely' if is_remote else ' locally'}.")

                # If a remote operation was performed, trigger a sync to update the local cache immediately
                if is_remote and self.app.services.mode == "remote":
                    try:
                        import os
                        cache_dir = os.path.join(self.app.services.config_svc.get_default_dir(), "remote_plugins")
                        # force_sync=True bypasses the hash check so the cache is refreshed now
                        self.app.plugins._import_remote_plugins_to_argparse(
                            self.app.services.plugins,
                            self.app.subparsers,
                            cache_dir,
                            force_sync=True
                        )
                    except Exception:
                        pass

            elif getattr(args, "sync", False):
                # The actual sync logic runs in connapp.py during init
                # when the --sync flag is detected in sys.argv
                printer.success("Remote plugins synchronized successfully.")
            elif args.list:
                # Fetch both local and remote plugins when in remote mode
                local_plugins = {}
                remote_plugins = {}

                # Fetch depending on mode
                if self.app.services.mode == "remote":
                    # For local plugins, instantiate a local service that bypasses the stub
                    from ..services.plugin_service import PluginService
                    local_svc = PluginService(self.app.services.config)
                    local_plugins = local_svc.list_plugins()
                    remote_plugins = self.app.services.plugins.list_plugins()
                else:
                    local_plugins = self.app.services.plugins.list_plugins()

                from rich.table import Table

                table = Table(title="Available Plugins", show_header=True, header_style="bold cyan")
                table.add_column("Plugin", style="cyan")
                table.add_column("State", style="bold")
                table.add_column("Origin", style="magenta")

                # Populate local plugins
                for name, details in local_plugins.items():
                    state = "Disabled" if not details.get("enabled", True) else "Active"
                    color = "red" if state == "Disabled" else "green"

                    if self.app.services.mode == "remote" and state == "Active":
                        if self.app.plugins.preferences.get(name) == "remote":
                            state = "Shadowed (Override by Remote)"
                            color = "yellow"

                    table.add_row(name, f"[{color}]{state}[/{color}]", "Local")

                # Populate remote plugins
                if self.app.services.mode == "remote":
                    for name, details in remote_plugins.items():
                        state = "Disabled" if not details.get("enabled", True) else "Active"
                        color = "red" if state == "Disabled" else "green"

                        if state == "Active":
                            pref = self.app.plugins.preferences.get(name, "local")
                            # If the preference isn't remote and the plugin exists locally, local takes priority
                            if pref != "remote" and name in local_plugins:
                                state = "Shadowed (Override by Local)"
                                color = "yellow"

                        table.add_row(name, f"[{color}]{state}[/{color}]", "Remote")

                if not local_plugins and not remote_plugins:
                    printer.console.print(" No plugins found.")
                else:
                    printer.console.print(table)

        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)
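The plugin listing above resolves a display state per plugin from its enabled flag, its origin, and the user's `preferences` map. That shadowing rule can be sketched as a pure function (name and signature hypothetical, logic mirroring the table-population branches):

```python
def plugin_state(name, enabled, origin, preferences, local_names):
    """Resolve a plugin's display state under the shadowing rules.

    A local plugin whose preference is "remote" is shadowed by the remote
    copy; an active remote plugin is shadowed when the preference is not
    "remote" and a local copy exists.
    """
    if not enabled:
        return "Disabled"
    if origin == "local" and preferences.get(name) == "remote":
        return "Shadowed (Override by Remote)"
    if origin == "remote" and preferences.get(name, "local") != "remote" and name in local_names:
        return "Shadowed (Override by Local)"
    return "Active"
```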
@@ -1,96 +0,0 @@
import sys
import yaml
import inquirer

from .. import printer
from ..services.exceptions import ConnpyError, ProfileNotFoundError
from .forms import Forms


class ProfileHandler:
    def __init__(self, app):
        self.app = app
        self.forms = Forms(app)

    def dispatch(self, args):
        if not self.app.case:
            args.data[0] = args.data[0].lower()
        actions = {"add": self.add, "del": self.delete, "mod": self.modify, "show": self.show}
        return actions.get(args.action)(args)

    def delete(self, args):
        name = args.data[0]
        try:
            self.app.services.profiles.get_profile(name)
        except ProfileNotFoundError:
            printer.error(f"{name} not found")
            sys.exit(2)

        if name == "default":
            printer.error("Can't delete default profile")
            sys.exit(6)

        question = [inquirer.Confirm("delete", message=f"Are you sure you want to delete {name}?")]
        confirm = inquirer.prompt(question)
        if confirm is None or not confirm["delete"]:
            sys.exit(7)

        try:
            self.app.services.profiles.delete_profile(name)
            printer.success(f"{name} deleted successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(8)

    def show(self, args):
        try:
            profile = self.app.services.profiles.get_profile(args.data[0])
            yaml_output = yaml.dump(profile, sort_keys=False, default_flow_style=False)
            printer.data(args.data[0], yaml_output)
        except ProfileNotFoundError:
            printer.error(f"{args.data[0]} not found")
            sys.exit(2)

    def add(self, args):
        name = args.data[0]
        if name in self.app.services.profiles.list_profiles():
            printer.error(f"Profile '{name}' already exists.")
            sys.exit(4)

        new_profile_data = self.forms.questions_profiles(name)
        if not new_profile_data:
            sys.exit(7)

        try:
            self.app.services.profiles.add_profile(name, new_profile_data)
            printer.success(f"{name} added successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def modify(self, args):
        name = args.data[0]
        try:
            profile = self.app.services.profiles.get_profile(name, resolve=False)
        except ProfileNotFoundError:
            printer.error(f"Profile '{name}' not found")
            sys.exit(2)

        old_profile = {"id": name, **profile}
        edits = self.forms.questions_edit()
        if edits is None:
            sys.exit(7)

        update_profile_data = self.forms.questions_profiles(name, edit=edits)
        if not update_profile_data:
            sys.exit(7)

        if sorted(update_profile_data.items()) == sorted(old_profile.items()):
            printer.info("Nothing to do here")
            return

        try:
            self.app.services.profiles.update_profile(name, update_profile_data)
            printer.success(f"{name} edited successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)
@@ -1,167 +0,0 @@
import os
import sys
import yaml
import threading
from rich.rule import Rule
from .. import printer
from ..services.exceptions import ConnpyError
from .help_text import get_instructions


class RunHandler:
    def __init__(self, app):
        self.app = app
        self.print_lock = threading.Lock()

    def dispatch(self, args):
        if len(args.data) > 1:
            args.action = "noderun"
        actions = {"noderun": self.node_run, "generate": self.yaml_generate, "run": self.yaml_run}
        return actions.get(args.action)(args)

    def node_run(self, args):
        nodes_filter = args.data[0]
        commands = [" ".join(args.data[1:])]

        try:
            header_printed = False

            if hasattr(args, 'test_expected') and args.test_expected:
                # Mode: Test
                def _on_node_complete(unique, node_output, node_status, node_result):
                    nonlocal header_printed
                    with self.print_lock:
                        if not header_printed:
                            printer.console.print(Rule("OUTPUT", style="header"))
                            header_printed = True
                        printer.test_panel(unique, node_output, node_status, node_result)

                results = self.app.services.execution.test_commands(
                    nodes_filter=nodes_filter,
                    commands=commands,
                    expected=args.test_expected,
                    on_node_complete=_on_node_complete
                )
                printer.test_summary(results)
            else:
                # Mode: Normal Run
                def _on_node_complete(unique, node_output, node_status):
                    nonlocal header_printed
                    with self.print_lock:
                        if not header_printed:
                            printer.console.print(Rule("OUTPUT", style="header"))
                            header_printed = True
                        printer.node_panel(unique, node_output, node_status)

                results = self.app.services.execution.run_commands(
                    nodes_filter=nodes_filter,
                    commands=commands,
                    on_node_complete=_on_node_complete
                )
                printer.run_summary(results)

        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def yaml_generate(self, args):
        if os.path.exists(args.data[0]):
            printer.error(f"File '{args.data[0]}' already exists.")
            sys.exit(14)
        else:
            with open(args.data[0], "w") as file:
                file.write(get_instructions("generate"))
            printer.success(f"File {args.data[0]} generated successfully")
            sys.exit()

    def yaml_run(self, args):
        path = args.data[0]
        try:
            with open(path, "r") as f:
                playbook = yaml.load(f, Loader=yaml.FullLoader)

            for task in playbook.get("tasks", []):
                self.cli_run(task)

        except Exception as e:
            printer.error(f"Failed to run playbook {path}: {e}")
            sys.exit(10)

    def cli_run(self, script):
        name = script.get("name", "Task")
        try:
            action = script["action"]
            nodelist = script["nodes"]
            commands = script["commands"]
            variables = script.get("variables")
            output_cfg = script["output"]
            options = script.get("options", {})
        except KeyError as e:
            printer.error(f"[{name}] '{e.args[0]}' is mandatory in script")
            sys.exit(11)

        stdout = (output_cfg == "stdout")
        folder = output_cfg if output_cfg not in [None, "stdout"] else None
        prompt = options.get("prompt")

        try:
            header_printed = False
            if action == "run":
                # If stdout is true, stream results as they arrive
                def _on_run_complete(unique, node_output, node_status):
                    nonlocal header_printed
                    if stdout:
                        with self.print_lock:
                            if not header_printed:
                                printer.console.print(Rule(name.upper(), style="header"))
                                header_printed = True
                            printer.node_panel(unique, node_output, node_status)

                results = self.app.services.execution.run_commands(
                    nodes_filter=nodelist,
                    commands=commands,
                    variables=variables,
                    parallel=options.get("parallel", 10),
                    timeout=options.get("timeout", 10),
                    folder=folder,
                    prompt=prompt,
                    on_node_complete=_on_run_complete
                )
                # Final summary
                if not stdout and not folder:
                    with self.print_lock:
                        printer.console.print(Rule(name.upper(), style="header"))
                        for unique, data in results.items():
                            output = data["output"] if isinstance(data, dict) else data
                            printer.node_panel(unique, output, 0)

                # ALWAYS show the aggregate execution summary at the end
                printer.run_summary(results)

            elif action == "test":
                expected = script.get("expected", [])

                # Show test_panel per node ONLY if stdout is True
                def _on_test_complete(unique, node_output, node_status, node_result):
                    nonlocal header_printed
                    if stdout:
                        with self.print_lock:
                            if not header_printed:
                                printer.console.print(Rule(name.upper(), style="header"))
                                header_printed = True
                            printer.test_panel(unique, node_output, node_status, node_result)

                results = self.app.services.execution.test_commands(
                    nodes_filter=nodelist,
                    commands=commands,
                    expected=expected,
                    variables=variables,
                    parallel=options.get("parallel", 10),
                    timeout=options.get("timeout", 10),
                    folder=folder,
                    prompt=prompt,
                    on_node_complete=_on_test_complete
                )
                # ALWAYS show the aggregate summary at the end
                printer.test_summary(results)

        except ConnpyError as e:
            printer.error(str(e))
@@ -1,126 +0,0 @@
import sys
import yaml
from .. import printer


class SyncHandler:
    def __init__(self, app):
        self.app = app

    def dispatch(self, args):
        action = getattr(args, "action", None)
        actions = {
            "login": self.login,
            "logout": self.logout,
            "status": self.status,
            "list": self.list_backups,
            "once": self.once,
            "restore": self.restore,
            "start": self.start,
            "stop": self.stop
        }
        handler = actions.get(action)
        if handler:
            return handler(args)

        return self.status(args)

    def login(self, args):
        self.app.services.sync.login()

    def logout(self, args):
        self.app.services.sync.logout()

    def status(self, args):
        status = self.app.services.sync.check_login_status()
        enabled = self.app.services.sync.sync_enabled
        remote = self.app.services.sync.sync_remote

        printer.info(f"Login Status: {status}")
        printer.info(f"Auto-Sync: {'Enabled' if enabled else 'Disabled'}")
        printer.info(f"Sync Remote Nodes: {'Yes' if remote else 'No'}")

    def list_backups(self, args):
        backups = self.app.services.sync.list_backups()
        if backups:
            yaml_output = yaml.dump(backups, sort_keys=False, default_flow_style=False)
            printer.custom("backups", "")
            print(yaml_output)
        else:
            printer.info("No backups found or not logged in.")

    def once(self, args):
        # Manual backup; include remote nodes when configured to do so
        remote_data = None
        if self.app.services.sync.sync_remote and self.app.services.mode == "remote":
            inventory = self.app.services.nodes.get_inventory()
            # Merge with local settings
            local_settings = self.app.services.config_svc.get_settings()
            local_settings.pop("configfolder", None)

            # Maintain the proper config structure: {config: {}, connections: {}, profiles: {}}
            remote_data = {
                "config": local_settings,
                "connections": inventory.get("connections", {}),
                "profiles": inventory.get("profiles", {})
            }

        if self.app.services.sync.compress_and_upload(remote_data):
            printer.success("Manual backup completed.")

    def restore(self, args):
        import inquirer
        file_id = getattr(args, "id", None)

        # Segmented flags
        restore_config = getattr(args, "restore_config", False)
        restore_nodes = getattr(args, "restore_nodes", False)

        # If neither is specified, restore ALL (backwards compatibility)
        if not restore_config and not restore_nodes:
            restore_config = True
            restore_nodes = True

        # 1. Analyze what is about to be restored
        info = self.app.services.sync.analyze_backup_content(file_id)
        if not info:
            printer.error("Could not analyze backup content.")
            return

        # 2. Show detailed info
        printer.info("Restoration Details:")
        if restore_config:
            print(" - Local Settings: Yes")
            print(f" - RSA Key (.osk): {'Yes' if info['has_key'] else 'No'}")
        if restore_nodes:
            target = "REMOTE" if self.app.services.mode == "remote" else "LOCAL"
            print(f" - Nodes: {info['nodes']}")
            print(f" - Folders: {info['folders']}")
            print(f" - Profiles: {info['profiles']}")
            print(f" - Destination: {target}")
        print("")

        questions = [inquirer.Confirm("confirm", message="Do you want to proceed with the restoration?", default=False)]
        answers = inquirer.prompt(questions)

        if not answers or not answers["confirm"]:
            printer.info("Restore cancelled.")
            return

        # 3. Perform the actual restore
        if self.app.services.sync.restore_backup(
            file_id=file_id,
            restore_config=restore_config,
            restore_nodes=restore_nodes,
            app_instance=self.app
        ):
            printer.success("Restore completed successfully.")

    def start(self, args):
        self.app.services.config_svc.update_setting("sync", True)
        self.app.services.sync.sync_enabled = True
        printer.success("Auto-sync enabled.")

    def stop(self, args):
        self.app.services.config_svc.update_setting("sync", False)
        self.app.services.sync.sync_enabled = False
        printer.success("Auto-sync disabled.")
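The restore path above treats "no segment flags given" as "restore everything" for backwards compatibility. That normalization is easy to isolate and test (function name hypothetical):

```python
def normalize_restore_flags(restore_config, restore_nodes):
    """When neither segment flag is set, restore everything.

    Mirrors the backwards-compatibility rule in SyncHandler.restore.
    """
    if not restore_config and not restore_nodes:
        return True, True
    return restore_config, restore_nodes


print(normalize_restore_flags(False, False))  # → (True, True)
```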
@@ -1,139 +0,0 @@
|
||||
import re
|
||||
import ast
|
||||
import inquirer
|
||||
|
||||
class Validators:
|
||||
def __init__(self, app):
|
||||
self.app = app
|
||||
|
||||
def host_validation(self, answers, current, regex = "^.+$"):
|
||||
if not re.match(regex, current):
|
||||
raise inquirer.errors.ValidationError("", reason="Host cannot be empty")
|
||||
if current.startswith("@"):
|
||||
if current[1:] not in self.app.profiles:
|
||||
raise inquirer.errors.ValidationError("", reason="Profile {} don't exist".format(current))
|
||||
return True
|
||||
|
||||
def profile_protocol_validation(self, answers, current, regex = "(^ssh$|^telnet$|^kubectl$|^docker$|^ssm$|^$)"):
|
||||
if not re.match(regex, current):
|
||||
raise inquirer.errors.ValidationError("", reason="Pick between ssh, telnet, kubectl, docker, ssm or leave empty")
|
||||
return True
|
||||
|
||||
def protocol_validation(self, answers, current, regex = "(^ssh$|^telnet$|^kubectl$|^docker$|^ssm$|^$|^@.+$)"):
|
||||
if not re.match(regex, current):
|
||||
raise inquirer.errors.ValidationError("", reason="Pick between ssh, telnet, kubectl, docker, ssm, leave empty or @profile")
|
||||
if current.startswith("@"):
|
||||
if current[1:] not in self.app.profiles:
|
||||
raise inquirer.errors.ValidationError("", reason="Profile {} don't exist".format(current))
|
||||
return True
|
||||
|
||||
def profile_port_validation(self, answers, current, regex = "(^[0-9]*$)"):
|
||||
if not re.match(regex, current):
|
||||
raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535, @profile o leave empty")
|
||||
try:
|
||||
port = int(current)
|
||||
except ValueError:
|
||||
port = 0
|
||||
if current != "" and not 1 <= int(port) <= 65535:
|
||||
raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535 or leave empty")
|
||||
return True
|
||||
|
||||
def port_validation(self, answers, current, regex = "(^[0-9]*$|^@.+$)"):
|
||||
if not re.match(regex, current):
|
||||
raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535, @profile or leave empty")
|
||||
try:
|
||||
port = int(current)
|
||||
except ValueError:
|
||||
port = 0
|
||||
if current.startswith("@"):
|
||||
if current[1:] not in self.app.profiles:
|
||||
raise inquirer.errors.ValidationError("", reason="Profile {} don't exist".format(current))
|
||||
elif current != "" and not 1 <= int(port) <= 65535:
|
||||
raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535, @profile o leave empty")
|
||||
return True
|
||||
|
||||
def pass_validation(self, answers, current, regex = "(^@.+$)"):
|
||||
profiles = current.split(",")
|
||||
for i in profiles:
|
||||
if not re.match(regex, i) or i[1:] not in self.app.profiles:
|
||||
raise inquirer.errors.ValidationError("", reason="Profile {} don't exist".format(i))
|
||||
return True
|
||||
|
||||
def tags_validation(self, answers, current):
|
||||
if current.startswith("@"):
|
||||
if current[1:] not in self.app.profiles:
|
||||
raise inquirer.errors.ValidationError("", reason="Profile {} don't exist".format(current))
|
||||
elif current != "":
|
||||
isdict = False
|
||||
try:
|
||||
isdict = ast.literal_eval(current)
|
||||
except Exception:
|
||||
pass
|
||||
if not isinstance (isdict, dict):
|
||||
            raise inquirer.errors.ValidationError("", reason="Tags should be a python dictionary.")
        return True

    def profile_tags_validation(self, answers, current):
        if current != "":
            isdict = False
            try:
                isdict = ast.literal_eval(current)
            except Exception:
                pass
            if not isinstance(isdict, dict):
                raise inquirer.errors.ValidationError("", reason="Tags should be a python dictionary.")
        return True

    def jumphost_validation(self, answers, current):
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        elif current != "":
            if current not in self.app.nodes_list:
                raise inquirer.errors.ValidationError("", reason="Node {} doesn't exist.".format(current))
        return True

    def profile_jumphost_validation(self, answers, current):
        if current != "":
            if current not in self.app.nodes_list:
                raise inquirer.errors.ValidationError("", reason="Node {} doesn't exist.".format(current))
        return True

    def default_validation(self, answers, current):
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        return True

    def bulk_node_validation(self, answers, current, regex = "^[0-9a-zA-Z_.,$#-]+$"):
        if not re.match(regex, current):
            raise inquirer.errors.ValidationError("", reason="Node list cannot be empty")
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        return True

    def bulk_folder_validation(self, answers, current):
        if not self.app.case:
            current = current.lower()

        candidate = current
        if "/" in current:
            candidate = current.split("/")[0]

        matches = list(filter(lambda k: k == candidate, self.app.folders))
        if current != "" and len(matches) == 0:
            raise inquirer.errors.ValidationError("", reason="Location {} doesn't exist".format(current))
        return True

    def bulk_host_validation(self, answers, current, regex = "^.+$"):
        if not re.match(regex, current):
            raise inquirer.errors.ValidationError("", reason="Host cannot be empty")
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        hosts = current.split(",")
        nodes = answers["ids"].split(",")
        if len(hosts) > 1 and len(hosts) != len(nodes):
            raise inquirer.errors.ValidationError("", reason="Hosts list should be the same length as the nodes list")
        return True
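The tag validators above accept any string that `ast.literal_eval` parses into a Python `dict`, and reject everything else without executing code. A minimal standalone sketch of that check (the function name `parse_tags` is illustrative, not part of connpy's API):

```python
import ast

def parse_tags(text):
    """Return the parsed dict, or None when the input is not a dict literal."""
    if text == "":
        return {}  # empty input is treated as "no tags", mirroring the validator
    try:
        # literal_eval safely parses Python literals without evaluating code
        value = ast.literal_eval(text)
    except (ValueError, SyntaxError):
        return None
    return value if isinstance(value, dict) else None

print(parse_tags("{'env': 'prod', 'rack': 3}"))  # → {'env': 'prod', 'rack': 3}
print(parse_tags("[1, 2, 3]"))                   # → None (a list, not a dict)
print(parse_tags("not a literal"))               # → None
```

Using `ast.literal_eval` instead of `eval` is the key design choice here: arbitrary expressions raise instead of running.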
+116 −338
@@ -1,23 +1,39 @@
import sys
import os
import json
import glob
import importlib.util


def load_txt_cache(filepath):
    try:
        with open(filepath, "r") as f:
            return f.read().splitlines()
    except FileNotFoundError:
        return []

def _getallnodes(config):
    #get all nodes on configfile
    nodes = []
    layer1 = [k for k,v in config["connections"].items() if isinstance(v, dict) and v["type"] == "connection"]
    folders = [k for k,v in config["connections"].items() if isinstance(v, dict) and v["type"] == "folder"]
    nodes.extend(layer1)
    for f in folders:
        layer2 = [k + "@" + f for k,v in config["connections"][f].items() if isinstance(v, dict) and v["type"] == "connection"]
        nodes.extend(layer2)
        subfolders = [k for k,v in config["connections"][f].items() if isinstance(v, dict) and v["type"] == "subfolder"]
        for s in subfolders:
            layer3 = [k + "@" + s + "@" + f for k,v in config["connections"][f][s].items() if isinstance(v, dict) and v["type"] == "connection"]
            nodes.extend(layer3)
    return nodes

def get_cwd(words, option=None, folderonly=False):
    import glob

def _getallfolders(config):
    #get all folders on configfile
    folders = ["@" + k for k,v in config["connections"].items() if isinstance(v, dict) and v["type"] == "folder"]
    subfolders = []
    for f in folders:
        s = ["@" + k + f for k,v in config["connections"][f[1:]].items() if isinstance(v, dict) and v["type"] == "subfolder"]
        subfolders.extend(s)
    folders.extend(subfolders)
    return folders

def _getcwd(words, option, folderonly=False):
    # Expand tilde to home directory if present
    if words[-1].startswith("~"):
        words[-1] = os.path.expanduser(words[-1])

    # If option is not provided, try to infer it from the first word
    if option is None and words:
        option = words[0]

    if words[-1] == option:
        path = './*'
    else:
@@ -35,21 +51,6 @@ def get_cwd(words, option=None, folderonly=False):

def _get_plugins(which, defaultdir):
    # Path to core_plugins relative to this script
    core_path = os.path.dirname(os.path.realpath(__file__)) + "/core_plugins"
    remote_path = os.path.join(defaultdir, "remote_plugins")

    # Load preferences
    import json
    pref_path = os.path.join(defaultdir, "plugin_preferences.json")
    try:
        with open(pref_path) as f:
            preferences = json.load(f)
    except Exception:
        preferences = {}

    # Load service mode
    # We try to infer if we are in remote mode by checking config.yaml or .folder
    # but for completion usually we just want to know if remote cache exists.
    # However, to be strict we should check preferences.

    def get_plugins_from_directory(directory):
        enabled_files = []
@@ -60,38 +61,21 @@ def _get_plugins(which, defaultdir):
        for file in os.listdir(directory):
            # Check if the file is a Python file
            if file.endswith('.py'):
                name = os.path.splitext(file)[0]
                enabled_files.append(name)
                all_plugins[name] = os.path.join(directory, file)
                enabled_files.append(os.path.splitext(file)[0])
                all_plugins[os.path.splitext(file)[0]] = os.path.join(directory, file)
            # Check if the file is a Python backup file
            elif file.endswith('.py.bkp'):
                name = os.path.splitext(os.path.splitext(file)[0])[0]
                disabled_files.append(name)
                disabled_files.append(os.path.splitext(os.path.splitext(file)[0])[0])
        return enabled_files, disabled_files, all_plugins

    # Get plugins from all directories
    # Get plugins from both directories
    user_enabled, user_disabled, user_all_plugins = get_plugins_from_directory(defaultdir + "/plugins")
    core_enabled, core_disabled, core_all_plugins = get_plugins_from_directory(core_path)
    remote_enabled, remote_disabled, remote_all_plugins = get_plugins_from_directory(remote_path)

    # Calculate final paths respecting priorities and preferences
    # Priority: User Local > Core Local > Remote (unless preferred)

    # Start with core
    final_all_plugins = core_all_plugins.copy()
    # Override with user local
    final_all_plugins.update(user_all_plugins)

    # For remote, we only use them if:
    # 1. They don't exist locally OR
    # 2. Preference is explicitly 'remote'
    for name, path in remote_all_plugins.items():
        if name not in final_all_plugins or preferences.get(name) == "remote":
            final_all_plugins[name] = path

    # Combine enabled/disabled for the helper commands
    enabled_files = list(set(user_enabled + core_enabled + [k for k,v in remote_all_plugins.items() if preferences.get(k) == "remote"]))
    disabled_files = list(set(user_disabled + core_disabled))
    # Combine the results from user and core plugins
    enabled_files = user_enabled
    disabled_files = user_disabled
    all_plugins = {**user_all_plugins, **core_all_plugins} # Merge dictionaries

    # Return based on the command
    if which == "--disable":
@@ -102,238 +86,7 @@ def _get_plugins(which, defaultdir):
        all_files = enabled_files + disabled_files
        return all_files
    elif which == "all":
        return final_all_plugins
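The merge above gives user-local plugins priority over core plugins, with remote plugins used only when a name is absent locally or a preference explicitly selects remote. A minimal sketch of that precedence with illustrative data (the paths and plugin names are made up, not connpy's real plugin set):

```python
def merge_plugins(core, user, remote, preferences):
    """Priority: user local > core local > remote (unless preference says 'remote')."""
    final = dict(core)      # start with core
    final.update(user)      # user-local overrides core
    for name, path in remote.items():
        # remote wins only when absent locally or explicitly preferred
        if name not in final or preferences.get(name) == "remote":
            final[name] = path
    return final

merged = merge_plugins(
    core={"sync": "/core/sync.py", "ai": "/core/ai.py"},
    user={"ai": "/user/ai.py"},
    remote={"sync": "/remote/sync.py", "extra": "/remote/extra.py"},
    preferences={"sync": "remote"},
)
# user's ai wins, sync is forced remote by preference, extra exists only remotely
```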

def _build_tree(nodes, folders, profiles, plugins, configdir):
    """Build the declarative CLI navigation tree.

    Structure:
    - dict: keys are completions + subnavigation.
        "__extra__" adds dynamic data.
        "__exclude_used__" filters already-typed words.
        "*" absorbs unknown positional words and loops to a specific node.
    - list: static choice completions.
    - callable: dynamic completions (called with `words`, returns list).
    - None: no further completions.
    """
    _nodes = lambda w=None: list(nodes)
    _folders = lambda w=None: list(folders)
    _profiles = lambda w=None: list(profiles)
    _nodes_folders = lambda w=None: list(nodes) + list(folders)

    _profile_values = {"__extra__": _profiles}

    # --- Stateful/Looping Nodes ---

    # list nodes
    list_nodes = {"__exclude_used__": True}
    list_nodes.update({
        "--format": {"*": list_nodes},
        "--filter": {"*": list_nodes},
        "*": list_nodes
    })

    # export / import / run loops
    export_dict = {"--help": None, "-h": None}
    export_dict.update({
        "*": export_dict,
        "__extra__": lambda w: get_cwd(w, "export", True) + [f for f in folders if not any(x in f for x in w[1:-1])]
    })

    import_dict = {"--help": None, "-h": None}
    import_dict.update({
        "*": import_dict,
        "__extra__": lambda w: get_cwd(w, "import")
    })

    # --- Run Loop ---
    # After the first positional argument (Node filter or YAML file),
    # we stop suggesting nodes and only allow flags or commands.
    run_after_node = {"--help": None, "-h": None}
    run_after_node.update({
        "--test": {"*": run_after_node},
        "-t": {"*": run_after_node},
        "*": run_after_node # Consume commands
    })

    run_dict = {
        "--generate": {"__extra__": lambda w: get_cwd(w, "--generate")},
        "-g": {"__extra__": lambda w: get_cwd(w, "-g")},
        "--test": {"*": None},
        "-t": {"*": None},
        "--help": None,
        "-h": None,
        "__extra__": lambda w: get_cwd(w, "run") + list(nodes),
        "*": run_after_node
    }

    # State Machine Definitions
    ai_dict = {"__exclude_used__": True, "--help": None, "-h": None}
    for opt in ["--engineer-model", "--engineer-api-key", "--architect-model", "--architect-api-key"]:
        ai_dict[opt] = {"*": ai_dict} # takes value, loops back
    for opt in ["--debug", "--trust", "--list", "--list-sessions", "--session", "--resume", "--delete", "--delete-session", "-y"]:
        ai_dict[opt] = ai_dict # takes no value, loops back
    ai_dict["*"] = ai_dict

    mv_state = {"__extra__": _nodes, "--help": None, "-h": None}
    cp_state = {"__extra__": _nodes, "--help": None, "-h": None}
    ls_state = {
        "profiles": None,
        "nodes": list_nodes,
        "folders": None,
    }

    # --- Connect (default command) ---
    # Long flags are offered; short forms (-d/-t) only used for navigation.
    # Two states: before node (offer nodes + remaining long flags)
    #             after node (offer only remaining long flags, no more nodes)
    connect_flags_long = ["--debug", "--sftp"]
    connect_flags_all = ["--debug", "-d", "--sftp", "-t"]

    # Post-node: only offer remaining long flags
    connect_after_node = {"__exclude_used__": True}
    for f in connect_flags_all:
        connect_after_node[f] = connect_after_node

    # Pre-node: offer nodes + remaining long flags, consume node → post-node state
    connect_dict = {"__exclude_used__": True}
    connect_dict["__extra__"] = lambda w: (
        list(nodes) + list(folders) + (list(plugins.keys()) if plugins else [])
    )
    connect_dict["*"] = connect_after_node
    for f in connect_flags_all:
        connect_dict[f] = connect_dict

    # --- Main Tree ---
    return {
        # Root: offer nodes + long flags; after a node go to post-node state
        "__extra__": lambda w: list(nodes) + list(folders) + (list(plugins.keys()) if plugins else []),
        "*": connect_after_node,

        "--debug": connect_dict,
        "-d": connect_dict,
        "--sftp": connect_dict,
        "-t": connect_dict,

        "--add": {"profile": _profile_values},
        "--del": {"profile": _profile_values, "__extra__": _nodes_folders},
        "--rm": {"profile": _profile_values, "__extra__": _nodes_folders},
        "--edit": {"profile": _profile_values, "__extra__": _nodes},
        "--mod": {"profile": _profile_values, "__extra__": _nodes},
        "--show": {"profile": _profile_values, "__extra__": _nodes},
        "--help": None,

        "-a": {"profile": _profile_values},
        "-r": {"profile": _profile_values, "__extra__": _nodes_folders},
        "-e": {"profile": _profile_values, "__extra__": _nodes},
        "-s": {"profile": _profile_values, "__extra__": _nodes},

        "profile": {
            "--add": None, "--rm": _profiles, "--del": _profiles,
            "--edit": _profiles, "--mod": _profiles, "--show": _profiles,
            "--help": None,
            "-a": None, "-r": _profiles, "-e": _profiles, "-s": _profiles, "-h": None,
        },
        "move": mv_state,
        "mv": mv_state,
        "copy": cp_state,
        "cp": cp_state,

        "list": ls_state,
        "ls": ls_state,

        "bulk": {"--file": None, "--help": None, "-f": None, "-h": None},
        "run": run_dict,
        "export": export_dict,
        "import": import_dict,
        "ai": ai_dict,

        "api": {
            "--start": None, "--restart": None, "--stop": None, "--debug": None,
            "--help": None,
            "-s": None, "-r": None, "-x": None, "-d": None, "-h": None,
        },
        "context": {
            "--add": None, "--rm": None, "--del": None,
            "--ls": None, "--set": None,
            "--show": None, "--edit": None, "--mod": None,
            "--help": None,
            "-a": None, "-r": None, "-s": None, "-e": None, "-h": None,
        },
        "plugin": {
            "--add": {"*": lambda w: get_cwd(w, "--add")},
            "--update": {"*": lambda w: get_cwd(w, "--update")},
            "--del": lambda w: _get_plugins("--del", configdir),
            "--enable": lambda w: _get_plugins("--enable", configdir),
            "--disable": lambda w: _get_plugins("--disable", configdir),
            "--list": None, "--help": None,
            "-h": None,
        },
        "config": {
            "--allow-uppercase": ["true", "false"],
            "--fzf": ["true", "false"],
            "--keepalive": None,
            "--completion": ["bash", "zsh"],
            "--fzf-wrapper": ["bash", "zsh"],
            "--configfolder": lambda w: get_cwd(w, "--configfolder", True),
            "--engineer-model": None, "--engineer-api-key": None,
            "--architect-model": None, "--architect-api-key": None,
            "--theme": None,
            "--service-mode": ["local", "remote"],
            "--remote": None,
            "--sync-remote": ["true", "false"],
            "--trusted-commands": None,
            "--help": None, "-h": None,
        },
        "sync": {
            "--login": None, "--logout": None,
            "--status": None, "--list": None,
            "--once": None, "--restore": None,
            "--start": None, "--stop": None,
            "--id": None, "--nodes": None, "--config": None,
            "--help": None, "-h": None,
        },
    }


def resolve_completion(words, tree):
    """Navigate the tree following typed words, properly handling dynamic state loops."""
    current = tree
    for word in words[:-1]:
        if isinstance(current, dict):
            if word in current:
                current = current[word]
            elif "*" in current:
                current = current["*"]
            else:
                return []
        else:
            return []

    results = []
    if isinstance(current, dict):
        results = [k for k in current
                   if not k.startswith("__")
                   and not k.startswith("*")
                   and not (len(k) == 2 and k in ["mv", "cp", "ls"])
                   and not (len(k) == 2 and k[0] == "-" and k[1] != "-")]

        if current.get("__exclude_used__"):
            results = [r for r in results if r not in words[:-1]]

        extra = current.get("__extra__")
        if callable(extra):
            results.extend(extra(words))
        elif isinstance(extra, list):
            results.extend(extra)
    elif isinstance(current, list):
        results = list(current)
    elif callable(current):
        results = list(current(words))

    return results

    return all_plugins
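The resolver walks the tree one typed word at a time, falling through to the `"*"` catch-all for unknown positionals. A self-contained mini version of that walk on a toy tree (this is a simplified model: the real resolver also handles `__extra__` callables, key filtering, and `__exclude_used__`):

```python
def walk(words, tree):
    """Follow every word except the one being completed; '*' absorbs unknown words."""
    current = tree
    for word in words[:-1]:
        if not isinstance(current, dict):
            return []
        # exact key first, then the '*' catch-all; None means "no further completions"
        current = current.get(word, current.get("*"))
        if current is None:
            return []
    if isinstance(current, dict):
        return sorted(k for k in current if k != "*")
    return []

toy = {"list": {"nodes": {"--filter": {"*": None}}, "profiles": None}}
print(walk(["list", ""], toy))           # completing the second word
print(walk(["list", "nodes", ""], toy))
```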
def main():
    home = os.path.expanduser("~")
@@ -342,17 +95,17 @@ def main():
    try:
        with open(pathfile, "r") as f:
            configdir = f.read().strip()
    except (FileNotFoundError, IOError):
    except:
        configdir = defaultdir
    cachefile = configdir + '/.config.cache.json'

    nodes = load_txt_cache(configdir + '/.fzf_nodes_cache.txt')
    folders = load_txt_cache(configdir + '/.folders_cache.txt')
    profiles = load_txt_cache(configdir + '/.profiles_cache.txt')
    plugins = _get_plugins("all", configdir)

    defaultfile = configdir + '/config.json'
    jsonconf = open(defaultfile)
    config = json.load(jsonconf)
    nodes = _getallnodes(config)
    folders = _getallfolders(config)
    profiles = list(config["profiles"].keys())
    plugins = _get_plugins("all", defaultdir)
    info = {}
    info["config"] = None
    info["config"] = config
    info["nodes"] = nodes
    info["folders"] = folders
    info["profiles"] = profiles
@@ -364,62 +117,87 @@ def main():
    positions = [1,3]
    wordsnumber = int(sys.argv[positions[0]])
    words = sys.argv[positions[1]:]
    if wordsnumber == 2:
        strings=["--add", "--del", "--rm", "--edit", "--mod", "--show", "mv", "move", "ls", "list", "cp", "copy", "profile", "run", "bulk", "config", "api", "ai", "export", "import", "--help", "plugin"]
        if plugins:
            strings.extend(plugins.keys())
        strings.extend(nodes)
        strings.extend(folders)

    # --- Plugin completion ---
    # Try new tree API first: _connpy_tree integrates into the main tree.
    # Fall back to legacy _connpy_completion for older plugins.
    elif wordsnumber >= 3 and plugins and words[0] in plugins:
        import importlib.util
        plugin_path = plugins[words[0]]
    elif wordsnumber >= 3 and words[0] in plugins.keys():
        try:
            spec = importlib.util.spec_from_file_location("module.name", plugin_path)
            spec = importlib.util.spec_from_file_location("module.name", plugins[words[0]])
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)
            module.get_cwd = get_cwd
        except Exception:
            plugin_completion = getattr(module, "_connpy_completion")
            strings = plugin_completion(wordsnumber, words, info)
        except:
            exit()

        # New API: _connpy_tree → integrate into main tree and use resolver
        if hasattr(module, "_connpy_tree"):
            plugin_node = module._connpy_tree(info)
            tree = _build_tree(nodes, folders, profiles, plugins, configdir)
            tree[words[0]] = plugin_node
            strings = resolve_completion(words, tree)

        # Legacy API: _connpy_completion → delegate entirely
        elif hasattr(module, "_connpy_completion"):
            import json
            try:
                with open(cachefile, "r") as jsonconf:
                    info["config"] = json.load(jsonconf)
            except Exception:
                try:
                    import yaml
                    with open(configdir + '/config.yaml', "r") as yamlconf:
                        info["config"] = yaml.safe_load(yamlconf)
                except Exception:
                    info["config"] = {}
            try:
                plugin_completion = getattr(module, "_connpy_completion")
                strings = plugin_completion(wordsnumber, words, info)
            except Exception:
                exit()
    elif wordsnumber >= 3 and words[0] == "ai":
        if wordsnumber == 3:
            strings = ["--help", "--org", "--model", "--api_key"]
        else:
            exit()
            strings = ["--org", "--model", "--api_key"]
    elif wordsnumber == 3:
        strings=[]
        if words[0] == "profile":
            strings=["--add", "--rm", "--del", "--edit", "--mod", "--show", "--help"]
        if words[0] == "config":
            strings=["--allow-uppercase", "--keepalive", "--completion", "--fzf", "--configfolder", "--openai-org", "--openai-org-api-key", "--openai-org-model","--help"]
        if words[0] == "api":
            strings=["--start", "--stop", "--restart", "--debug", "--help"]
        if words[0] in ["--mod", "--edit", "-e", "--show", "-s", "--add", "-a", "--rm", "--del", "-r"]:
            strings=["profile"]
        if words[0] in ["list", "ls"]:
            strings=["profiles", "nodes", "folders"]
        if words[0] in ["bulk", "mv", "cp", "copy"]:
            strings=["--help"]
        if words[0] in ["--rm", "--del", "-r"]:
            strings.extend(folders)
        if words[0] in ["--rm", "--del", "-r", "--mod", "--edit", "-e", "--show", "-s", "mv", "move", "cp", "copy"]:
            strings.extend(nodes)
        if words[0] == "plugin":
            strings = ["--help", "--add", "--update", "--del", "--enable", "--disable", "--list"]
        if words[0] in ["run", "import", "export"]:
            strings = ["--help"]
            if words[0] == "export":
                pathstrings = _getcwd(words, words[0], True)
            else:
                pathstrings = _getcwd(words, words[0])
            strings.extend(pathstrings)
            if words[0] == "run":
                strings.extend(nodes)

    # --- Tree-based completion ---
    elif wordsnumber >= 4 and words[0] == "export" and words[1] != "--help":
        strings = [item for item in folders if not any(word in item for word in words[:-1])]

    elif wordsnumber >= 4 and words[0] in ["list", "ls"] and words[1] == "nodes":
        options = ["--format", "--filter"]
        strings = [item for item in options if not any(word in item for word in words[:-1])]

    elif wordsnumber == 4:
        strings=[]
        if words[0] == "profile" and words[1] in ["--rm", "--del", "-r", "--mod", "--edit", "-e", "--show", "-s"]:
            strings.extend(profiles)
        if words[1] == "profile" and words[0] in ["--rm", "--del", "-r", "--mod", "--edit", "-e", "--show", "-s"]:
            strings.extend(profiles)
        if words[0] == "config" and words[1] == "--completion":
            strings=["bash", "zsh"]
        if words[0] == "config" and words[1] in ["--fzf", "--allow-uppercase"]:
            strings=["true", "false"]
        if words[0] == "config" and words[1] in ["--configfolder"]:
            strings=_getcwd(words,words[1],True)
        if words[0] == "plugin" and words[1] in ["--update", "--del", "--enable", "--disable"]:
            strings=_get_plugins(words[1], defaultdir)

    elif wordsnumber == 5 and words[0] == "plugin" and words[1] in ["--add", "--update"]:
        strings=_getcwd(words, words[2])
    else:
        tree = _build_tree(nodes, folders, profiles, plugins, configdir)
        strings = resolve_completion(words, tree)
        exit()

    current_word = words[-1] if len(words) > 0 else ""
    matches = [s for s in strings if s.startswith(current_word)]

    if app == "bash":
        strings = [s if s.endswith('/') else f"'{s} '" for s in matches]
    else:
        strings = matches

    strings = [s if s.endswith('/') else f"'{s} '" for s in strings]
    print('\t'.join(strings))

if __name__ == '__main__':
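The tail of `main()` above prefix-filters the candidates against the word being typed and, for bash, wraps each completion in quotes with a trailing space while leaving `/`-terminated directory candidates unquoted so completion can keep descending. A standalone sketch of that output step (`format_matches` is an illustrative name, not connpy's):

```python
def format_matches(strings, current_word, shell):
    """Keep candidates matching the typed prefix; quote them for bash output."""
    matches = [s for s in strings if s.startswith(current_word)]
    if shell == "bash":
        # quote and add a trailing space, except for directories ("/" suffix)
        matches = [s if s.endswith('/') else f"'{s} '" for s in matches]
    return '\t'.join(matches)  # tab-separated, as the completion script prints

print(format_matches(["--help", "--add", "dir/"], "--", "bash"))
```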
+100 −270
@@ -3,19 +3,15 @@
import json
import os
import re
import sys
import yaml
import shutil
from Crypto.PublicKey import RSA
from Crypto.Cipher import PKCS1_OAEP
from pathlib import Path
from copy import deepcopy
from .hooks import MethodHook, ClassHook
from . import printer

class NoAliasDumper(yaml.SafeDumper):
    def ignore_aliases(self, data):
        return True


#functions and classes

@ClassHook
class configfile:
@@ -49,7 +45,7 @@ class configfile:
        ### Optional Parameters:

        - conf (str): Path/file to config file. If left empty default
          path is ~/.config/conn/config.yaml
          path is ~/.config/conn/config.json

        - key (str): Path/file to RSA key file. If left empty default
          path is ~/.config/conn/.osk
@@ -57,207 +53,77 @@ class configfile:
        '''
        home = os.path.expanduser("~")
        defaultdir = home + '/.config/conn'

        if conf is None:
            # Standard path: use ~/.config/conn and respect .folder redirection
            self.anchor_path = defaultdir
            self.defaultdir = defaultdir
            Path(defaultdir).mkdir(parents=True, exist_ok=True)

            pathfile = defaultdir + '/.folder'
            try:
                with open(pathfile, "r") as f:
                    configdir = f.read().strip()
            except (FileNotFoundError, IOError):
                with open(pathfile, "w") as f:
                    f.write(str(defaultdir))
                configdir = defaultdir

            self.defaultdir = configdir
            self.file = configdir + '/config.yaml'
            self.key = key or (configdir + '/.osk')

            # Ensure redirected directories exist
            Path(configdir).mkdir(parents=True, exist_ok=True)
            Path(f"{configdir}/plugins").mkdir(parents=True, exist_ok=True)

            # Backwards compatibility: Migrate from JSON to YAML only for default path
            legacy_json = configdir + '/config.json'
            legacy_noext = configdir + '/config'
            legacy_file = None
            if os.path.exists(legacy_json): legacy_file = legacy_json
            elif os.path.exists(legacy_noext): legacy_file = legacy_noext

            if not os.path.exists(self.file) and legacy_file:
                try:
                    with open(legacy_file, 'r') as f:
                        old_data = json.load(f)
                    if not self._validate_config(old_data):
                        printer.warning(f"Legacy config {legacy_file} has invalid structure, skipping migration.")
                    else:
                        with open(self.file, 'w') as f:
                            yaml.dump(old_data, f, Dumper=NoAliasDumper, default_flow_style=False, sort_keys=False)
                        # Verify the written YAML can be read back correctly
                        with open(self.file, 'r') as f:
                            verify = yaml.safe_load(f)
                        if not self._validate_config(verify):
                            os.remove(self.file)
                            printer.warning("YAML verification failed after migration, keeping legacy config.")
                        else:
                            # Note: cachefile is derived later, we use temp one for migration sync
                            temp_cache = configdir + '/.config.cache.json'
                            with open(temp_cache, 'w') as f:
                                json.dump(old_data, f)
                            shutil.move(legacy_file, legacy_file + ".backup")
                            printer.success(f"Migrated legacy config ({len(old_data.get('connections',{}))} folders/nodes) into YAML and Cache successfully!")
                except Exception as e:
                    if os.path.exists(self.file):
                        try: os.remove(self.file)
                        except OSError: pass
                    printer.warning(f"Failed to migrate legacy config: {e}")
        self.defaultdir = defaultdir
        Path(defaultdir).mkdir(parents=True, exist_ok=True)
        Path(f"{defaultdir}/plugins").mkdir(parents=True, exist_ok=True)
        pathfile = defaultdir + '/.folder'
        try:
            with open(pathfile, "r") as f:
                configdir = f.read().strip()
        except:
            with open(pathfile, "w") as f:
                f.write(str(defaultdir))
            configdir = defaultdir
        defaultfile = configdir + '/config.json'
        defaultkey = configdir + '/.osk'
        if conf == None:
            self.file = defaultfile
        else:
            # Custom path (common in tests): isolate everything to the conf parent directory
            self.file = os.path.abspath(conf)
            configdir = os.path.dirname(self.file)
            self.anchor_path = configdir
            self.defaultdir = configdir
            self.key = os.path.abspath(key) if key else (configdir + '/.osk')

        # Sidecar files always live next to the config file (or in the redirected configdir)
        self.cachefile = configdir + '/.config.cache.json'
        self.fzf_cachefile = configdir + '/.fzf_nodes_cache.txt'
        self.folders_cachefile = configdir + '/.folders_cache.txt'
        self.profiles_cachefile = configdir + '/.profiles_cache.txt'

            self.file = conf
        if key == None:
            self.key = defaultkey
        else:
            self.key = key
        if os.path.exists(self.file):
            config = self._loadconfig(self.file)
        else:
            config = self._createconfig(self.file)

        self.config = config["config"]
        self.connections = config["connections"]
        self.profiles = config["profiles"]

        if not os.path.exists(self.key):
            self._createkey(self.key)
        with open(self.key) as f:
            self.privatekey = RSA.import_key(f.read())
        f.close()
        self.publickey = self.privatekey.publickey()

        # Self-heal text caches if they are missing
        if not os.path.exists(self.fzf_cachefile) or not os.path.exists(self.folders_cachefile) or not os.path.exists(self.profiles_cachefile):
            self._generate_nodes_cache()
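The constructor above resolves the active config directory through a `.folder` pointer file, creating it on first run so later invocations can be redirected elsewhere. A standalone sketch of that read-or-create redirection (pure stdlib, with a temp directory standing in for `~/.config/conn`):

```python
import os
import tempfile

def resolve_configdir(defaultdir):
    """Read the .folder pointer if present; otherwise create it pointing at defaultdir."""
    pathfile = os.path.join(defaultdir, ".folder")
    try:
        with open(pathfile) as f:
            return f.read().strip()
    except (FileNotFoundError, IOError):
        # first run: persist the default so future runs find it
        with open(pathfile, "w") as f:
            f.write(defaultdir)
        return defaultdir

with tempfile.TemporaryDirectory() as d:
    first = resolve_configdir(d)  # creates .folder on first call
    # redirect subsequent runs by rewriting the pointer
    with open(os.path.join(d, ".folder"), "w") as f:
        f.write("/tmp/elsewhere")
    second = resolve_configdir(d)
```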
def _validate_config(self, data):
|
||||
"""Verify config data has the required structure."""
|
||||
if not isinstance(data, dict):
|
||||
return False
|
||||
required = {"config", "connections", "profiles"}
|
||||
return required.issubset(data.keys())
|
||||
|
||||
def _loadconfig(self, conf):
|
||||
#Loads config file using dual cache
|
||||
cache_exists = os.path.exists(self.cachefile)
|
||||
yaml_time = os.path.getmtime(conf) if os.path.exists(conf) else 0
|
||||
cache_time = os.path.getmtime(self.cachefile) if cache_exists else 0
|
||||
|
||||
if not cache_exists or yaml_time > cache_time:
|
||||
with open(conf, 'r') as f:
|
||||
data = yaml.safe_load(f)
|
||||
if not self._validate_config(data):
|
||||
# YAML is broken, try to recover from cache
|
||||
if cache_exists:
|
||||
printer.warning("Config file appears corrupt, recovering from cache...")
|
||||
with open(self.cachefile, 'r') as f:
|
||||
data = json.load(f)
|
||||
if self._validate_config(data):
|
||||
# Re-write the YAML from good cache
|
||||
with open(conf, 'w') as f:
|
||||
yaml.dump(data, f, Dumper=NoAliasDumper, default_flow_style=False, sort_keys=False)
|
||||
return data
|
||||
# Both broken or no cache - create fresh
|
||||
printer.error("Config file is corrupt and no valid cache exists. Creating default config.")
|
||||
return self._createconfig(conf)
|
||||
try:
|
||||
with open(self.cachefile, 'w') as f:
|
||||
json.dump(data, f)
|
||||
except Exception:
|
||||
pass
|
||||
return data
|
||||
else:
|
||||
with open(self.cachefile, 'r') as f:
|
||||
data = json.load(f)
|
||||
if not self._validate_config(data):
|
||||
# Cache broken, try yaml
|
||||
with open(conf, 'r') as f:
|
||||
data = yaml.safe_load(f)
|
||||
if self._validate_config(data):
|
||||
return data
|
||||
# Both broken
|
||||
printer.error("Both config and cache are corrupt. Creating default config.")
|
||||
return self._createconfig(conf)
|
||||
return data
|
||||
#Loads config file
|
||||
jsonconf = open(conf)
|
||||
jsondata = json.load(jsonconf)
|
||||
jsonconf.close()
|
||||
return jsondata
|
||||
|
||||
def _createconfig(self, conf):
|
||||
#Create config file (always writes defaults, safe for recovery)
|
||||
#Create config file
|
||||
defaultconfig = {'config': {'case': False, 'idletime': 30, 'fzf': False}, 'connections': {}, 'profiles': { "default": { "host":"", "protocol":"ssh", "port":"", "user":"", "password":"", "options":"", "logs":"", "tags": "", "jumphost":""}}}
|
||||
with open(conf, "w") as f:
|
||||
yaml.dump(defaultconfig, f, Dumper=NoAliasDumper, default_flow_style=False, sort_keys=False)
|
||||
os.chmod(conf, 0o600)
|
||||
try:
|
||||
with open(self.cachefile, 'w') as f:
|
||||
json.dump(defaultconfig, f)
|
||||
except Exception:
|
||||
pass
|
||||
return defaultconfig
|
||||
if not os.path.exists(conf):
|
||||
with open(conf, "w") as f:
|
||||
json.dump(defaultconfig, f, indent = 4)
|
||||
f.close()
|
||||
os.chmod(conf, 0o600)
|
||||
jsonconf = open(conf)
|
||||
jsondata = json.load(jsonconf)
|
||||
jsonconf.close()
|
||||
return jsondata
|
||||
|
||||
@MethodHook
def _saveconfig(self, conf):
    #Save config file atomically to prevent corruption
    #Save config file
    newconfig = {"config":{}, "connections": {}, "profiles": {}}
    newconfig["config"] = self.config
    newconfig["connections"] = self.connections
    newconfig["profiles"] = self.profiles
    tmpfile = conf + '.tmp'
    try:
        with open(tmpfile, "w") as f:
            yaml.dump(newconfig, f, Dumper=NoAliasDumper, default_flow_style=False, sort_keys=False)
        # Atomic replace: only overwrite original if write succeeded
        shutil.move(tmpfile, conf)
        with open(self.cachefile, "w") as f:
            json.dump(newconfig, f)
        self._generate_nodes_cache()
    except (IOError, OSError) as e:
        printer.error(f"Failed to save config: {e}")
        # Clean up temp file if it exists
        if os.path.exists(tmpfile):
            try:
                os.remove(tmpfile)
            except OSError:
                pass
    with open(conf, "w") as f:
        json.dump(newconfig, f, indent = 4)
        f.close()
    except:
        return 1
    return 0
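The write-to-temp-then-move pattern in `_saveconfig` above can be sketched in isolation. This is a minimal illustration, not the project's code: it uses `os.replace` rather than `shutil.move`, and the function and file names are invented for the example.

```python
import json
import os
import tempfile

def atomic_save(path, data):
    """Write JSON to a temp file in the same directory, then atomically
    replace the target, so a crash mid-write never leaves a torn file."""
    dirname = os.path.dirname(path) or "."
    fd, tmppath = tempfile.mkstemp(dir=dirname, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f)
        os.replace(tmppath, path)  # atomic rename when on the same filesystem
    except OSError:
        # Clean up the temp file; the original config is untouched
        if os.path.exists(tmppath):
            os.remove(tmppath)
        raise
```

Writing to a sibling temp file (same directory, hence same filesystem) is what makes the final rename atomic on POSIX.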

def _generate_nodes_cache(self, nodes=None, folders=None, profiles=None):
    try:
        if nodes is None:
            nodes = self._getallnodes()
        if folders is None:
            folders = self._getallfolders()
        if profiles is None:
            profiles = list(self.profiles.keys())

        with open(self.fzf_cachefile, "w") as f:
            f.write("\n".join(nodes))
        with open(self.folders_cachefile, "w") as f:
            f.write("\n".join(folders))
        with open(self.profiles_cachefile, "w") as f:
            f.write("\n".join(profiles))
    except Exception:
        pass

def _createkey(self, keyfile):
    #Create key file
    key = RSA.generate(2048)

@@ -289,7 +155,7 @@ class configfile:
    return result

@MethodHook
def getitem(self, unique, keys = None, extract = False):
def getitem(self, unique, keys = None):
    '''
    Get a node or a group of nodes from configfile which can be passed to node/nodes class

@@ -303,8 +169,6 @@ class configfile:

    - keys (list): In case you pass a folder as unique, you can filter
      nodes inside the folder passing a list.
    - extract (bool): If True, extract information from profiles.
      Default False.

    ### Returns:

@@ -320,35 +184,21 @@ class configfile:
    folder = self.connections[uniques["folder"]]
    newfolder = deepcopy(folder)
    newfolder.pop("type")
    for node_name in folder.keys():
        if node_name == "type":
    for node in folder.keys():
        if node == "type":
            continue
        if "type" in newfolder[node_name].keys():
            if newfolder[node_name]["type"] == "subfolder":
                newfolder.pop(node_name)
        if "type" in newfolder[node].keys():
            if newfolder[node]["type"] == "subfolder":
                newfolder.pop(node)
            else:
                newfolder[node_name].pop("type")

    if keys != None:
        newfolder = dict((k, newfolder[k]) for k in keys)

    if extract:
        for node_name, node_keys in newfolder.items():
            for key, value in node_keys.items():
                profile = re.search("^@(.*)", str(value))
                if profile:
                    try:
                        newfolder[node_name][key] = self.profiles[profile.group(1)][key]
                    except KeyError:
                        newfolder[node_name][key] = ""
                elif value == '' and key == "protocol":
                    try:
                        newfolder[node_name][key] = self.profiles["default"][key]
                    except KeyError:
                        newfolder[node_name][key] = "ssh"

    newfolder = {"{}{}".format(k,unique):v for k,v in newfolder.items()}
    return newfolder
                newfolder[node].pop("type")
    if keys == None:
        newfolder = {"{}{}".format(k,unique):v for k,v in newfolder.items()}
        return newfolder
    else:
        f_newfolder = dict((k, newfolder[k]) for k in keys)
        f_newfolder = {"{}{}".format(k,unique):v for k,v in f_newfolder.items()}
        return f_newfolder
else:
    if uniques.keys() >= {"folder", "subfolder"}:
        node = self.connections[uniques["folder"]][uniques["subfolder"]][uniques["id"]]
@@ -358,24 +208,10 @@ class configfile:
        node = self.connections[uniques["id"]]
    newnode = deepcopy(node)
    newnode.pop("type")

    if extract:
        for key, value in newnode.items():
            profile = re.search("^@(.*)", str(value))
            if profile:
                try:
                    newnode[key] = self.profiles[profile.group(1)][key]
                except KeyError:
                    newnode[key] = ""
            elif value == '' and key == "protocol":
                try:
                    newnode[key] = self.profiles["default"][key]
                except KeyError:
                    newnode[key] = "ssh"
    return newnode
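The `^@(.*)` lookup above resolves values like `@default` against the profiles table, falling back to an empty string when the reference is dangling. A standalone sketch of that resolution step (the helper name and profile data here are invented for illustration):

```python
import re

def resolve(value, key, profiles):
    """Replace an '@profile' reference with the profile's value for `key`,
    returning '' when the profile or key does not exist."""
    m = re.search(r"^@(.*)", str(value))
    if m:
        try:
            return profiles[m.group(1)][key]
        except KeyError:
            return ""
    return value

profiles = {"default": {"user": "admin", "protocol": "ssh"}}
print(resolve("@default", "user", profiles))  # admin
print(resolve("root", "user", profiles))      # root
print(resolve("@missing", "user", profiles))  # empty string: dangling reference
```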

@MethodHook
def getitems(self, uniques, extract = False):
def getitems(self, uniques):
    '''
    Get a group of nodes from configfile which can be passed to node/nodes class

@@ -385,11 +221,6 @@ class configfile:
      from the connection manager. It can be a
      list of strings.

    ### Optional Parameters:

    - extract (bool): If True, extract information from profiles.
      Default False.

    ### Returns:

    dict: Dictionary containing information of node or multiple
@@ -400,15 +231,23 @@ class configfile:
    if isinstance(uniques, str):
        uniques = [uniques]
    for i in uniques:
        if i.startswith("@"):
        if isinstance(i, dict):
            name = list(i.keys())[0]
            mylist = i[name]
            if not self.config["case"]:
                name = name.lower()
                mylist = [item.lower() for item in mylist]
            this = self.getitem(name, mylist)
            nodes.update(this)
        elif i.startswith("@"):
            if not self.config["case"]:
                i = i.lower()
            this = self.getitem(i, extract = extract)
            this = self.getitem(i)
            nodes.update(this)
        else:
            if not self.config["case"]:
                i = i.lower()
            this = self.getitem(i, extract = extract)
            this = self.getitem(i)
            nodes[i] = this
    return nodes

@@ -468,57 +307,48 @@ class configfile:
def _getallnodes(self, filter = None):
    #get all nodes on configfile
    nodes = []
    layer1 = [k for k,v in self.connections.items() if isinstance(v, dict) and v.get("type") == "connection"]
    folders = [k for k,v in self.connections.items() if isinstance(v, dict) and v.get("type") == "folder"]
    layer1 = [k for k,v in self.connections.items() if isinstance(v, dict) and v["type"] == "connection"]
    folders = [k for k,v in self.connections.items() if isinstance(v, dict) and v["type"] == "folder"]
    nodes.extend(layer1)
    for f in folders:
        layer2 = [k + "@" + f for k,v in self.connections[f].items() if isinstance(v, dict) and v.get("type") == "connection"]
        layer2 = [k + "@" + f for k,v in self.connections[f].items() if isinstance(v, dict) and v["type"] == "connection"]
        nodes.extend(layer2)
        subfolders = [k for k,v in self.connections[f].items() if isinstance(v, dict) and v.get("type") == "subfolder"]
        subfolders = [k for k,v in self.connections[f].items() if isinstance(v, dict) and v["type"] == "subfolder"]
        for s in subfolders:
            layer3 = [k + "@" + s + "@" + f for k,v in self.connections[f][s].items() if isinstance(v, dict) and v.get("type") == "connection"]
            layer3 = [k + "@" + s + "@" + f for k,v in self.connections[f][s].items() if isinstance(v, dict) and v["type"] == "connection"]
            nodes.extend(layer3)
    if filter:
        flat_filter = []
        if isinstance(filter, str):
            flat_filter = [filter]
            nodes = [item for item in nodes if re.search(filter, item)]
        elif isinstance(filter, list):
            for item in filter:
                if isinstance(item, str):
                    flat_filter.append(item)
            nodes = [item for item in nodes if any(re.search(pattern, item) for pattern in filter)]
        else:
            printer.error("Filter must be a string or a list of strings")
            sys.exit(1)
        nodes = [item for item in nodes if any(re.search(pattern, item) for pattern in flat_filter)]
        raise ValueError("filter must be a string or a list of strings")
    return nodes
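The three `layer` loops above flatten a two-level folder tree into `name@subfolder@folder` strings. A condensed sketch of the same walk over an invented tree (the function name and sample data are illustrative, not the project's API):

```python
def getallnodes(connections):
    # Walk connections -> folders -> subfolders, naming each node by its path.
    nodes = [k for k, v in connections.items()
             if isinstance(v, dict) and v.get("type") == "connection"]
    folders = [k for k, v in connections.items()
               if isinstance(v, dict) and v.get("type") == "folder"]
    for f in folders:
        for k, v in connections[f].items():
            if not isinstance(v, dict):
                continue
            if v.get("type") == "connection":
                nodes.append(k + "@" + f)
            elif v.get("type") == "subfolder":
                for k2, v2 in v.items():
                    if isinstance(v2, dict) and v2.get("type") == "connection":
                        nodes.append(k2 + "@" + k + "@" + f)
    return nodes

tree = {
    "server1": {"type": "connection"},
    "office": {"type": "folder",
               "router": {"type": "connection"},
               "lab": {"type": "subfolder", "switch": {"type": "connection"}}},
}
print(getallnodes(tree))  # ['server1', 'router@office', 'switch@lab@office']
```

Using `v.get("type")` rather than `v["type"]` keeps the walk from raising on non-node entries, which mirrors the diff's change above.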

@MethodHook
def _getallnodesfull(self, filter = None, extract = True):
    #get all nodes on configfile with all their attributes.
    nodes = {}
    layer1 = {k:v for k,v in self.connections.items() if isinstance(v, dict) and v.get("type") == "connection"}
    folders = [k for k,v in self.connections.items() if isinstance(v, dict) and v.get("type") == "folder"]
    layer1 = {k:v for k,v in self.connections.items() if isinstance(v, dict) and v["type"] == "connection"}
    folders = [k for k,v in self.connections.items() if isinstance(v, dict) and v["type"] == "folder"]
    nodes.update(layer1)
    for f in folders:
        layer2 = {k + "@" + f:v for k,v in self.connections[f].items() if isinstance(v, dict) and v.get("type") == "connection"}
        layer2 = {k + "@" + f:v for k,v in self.connections[f].items() if isinstance(v, dict) and v["type"] == "connection"}
        nodes.update(layer2)
        subfolders = [k for k,v in self.connections[f].items() if isinstance(v, dict) and v.get("type") == "subfolder"]
        subfolders = [k for k,v in self.connections[f].items() if isinstance(v, dict) and v["type"] == "subfolder"]
        for s in subfolders:
            layer3 = {k + "@" + s + "@" + f:v for k,v in self.connections[f][s].items() if isinstance(v, dict) and v.get("type") == "connection"}
            layer3 = {k + "@" + s + "@" + f:v for k,v in self.connections[f][s].items() if isinstance(v, dict) and v["type"] == "connection"}
            nodes.update(layer3)
    if filter:
        flat_filter = []
        if isinstance(filter, str):
            flat_filter = [filter]
            filter = "^(?!.*@).+$" if filter == "@" else filter
            nodes = {k: v for k, v in nodes.items() if re.search(filter, k)}
        elif isinstance(filter, list):
            for item in filter:
                if isinstance(item, str):
                    flat_filter.append(item)
            filter = ["^(?!.*@).+$" if item == "@" else item for item in filter]
            nodes = {k: v for k, v in nodes.items() if any(re.search(pattern, k) for pattern in filter)}
        else:
            printer.error("Filter must be a string or a list of strings")
            sys.exit(1)
        flat_filter = ["^(?!.*@).+$" if item == "@" else item for item in flat_filter]
        nodes = {k: v for k, v in nodes.items() if any(re.search(pattern, k) for pattern in flat_filter)}
        raise ValueError("filter must be a string or a list of strings")
    if extract:
        for node, keys in nodes.items():
            for key, value in keys.items():
@@ -526,12 +356,12 @@ class configfile:
                if profile:
                    try:
                        nodes[node][key] = self.profiles[profile.group(1)][key]
                    except KeyError:
                    except:
                        nodes[node][key] = ""
                elif value == '' and key == "protocol":
                    try:
                        nodes[node][key] = self.profiles["default"][key]
                    except KeyError:
                        nodes[node][key] = config.profiles["default"][key]
                    except:
                        nodes[node][key] = "ssh"
    return nodes
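The special-case mapping of the filter `"@"` to `^(?!.*@).+$` above selects only top-level nodes, i.e. names that contain no `@` separator. The negative lookahead can be seen in isolation:

```python
import re

names = ["server1", "router@office", "switch@lab@office"]
root_only = "^(?!.*@).+$"  # negative lookahead: reject any name containing '@'
print([n for n in names if re.search(root_only, n)])  # ['server1']
```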

@@ -539,27 +369,27 @@ class configfile:
@MethodHook
def _getallfolders(self):
    #get all folders on configfile
    folders = ["@" + k for k,v in self.connections.items() if isinstance(v, dict) and v.get("type") == "folder"]
    folders = ["@" + k for k,v in self.connections.items() if isinstance(v, dict) and v["type"] == "folder"]
    subfolders = []
    for f in folders:
        s = ["@" + k + f for k,v in self.connections[f[1:]].items() if isinstance(v, dict) and v.get("type") == "subfolder"]
        s = ["@" + k + f for k,v in self.connections[f[1:]].items() if isinstance(v, dict) and v["type"] == "subfolder"]
        subfolders.extend(s)
    folders.extend(subfolders)
    return folders

@MethodHook
def _profileused(self, profile):
    #Return all the nodes that uses this profile.
    #Check if profile is used before deleting it
    nodes = []
    layer1 = [k for k,v in self.connections.items() if isinstance(v, dict) and v.get("type") == "connection" and ("@" + profile in v.values() or ( isinstance(v.get("password"),list) and "@" + profile in v.get("password")))]
    folders = [k for k,v in self.connections.items() if isinstance(v, dict) and v.get("type") == "folder"]
    layer1 = [k for k,v in self.connections.items() if isinstance(v, dict) and v["type"] == "connection" and ("@" + profile in v.values() or ( isinstance(v["password"],list) and "@" + profile in v["password"]))]
    folders = [k for k,v in self.connections.items() if isinstance(v, dict) and v["type"] == "folder"]
    nodes.extend(layer1)
    for f in folders:
        layer2 = [k + "@" + f for k,v in self.connections[f].items() if isinstance(v, dict) and v.get("type") == "connection" and ("@" + profile in v.values() or ( isinstance(v.get("password"),list) and "@" + profile in v.get("password")))]
        layer2 = [k + "@" + f for k,v in self.connections[f].items() if isinstance(v, dict) and v["type"] == "connection" and ("@" + profile in v.values() or ( isinstance(v["password"],list) and "@" + profile in v["password"]))]
        nodes.extend(layer2)
        subfolders = [k for k,v in self.connections[f].items() if isinstance(v, dict) and v.get("type") == "subfolder"]
        subfolders = [k for k,v in self.connections[f].items() if isinstance(v, dict) and v["type"] == "subfolder"]
        for s in subfolders:
            layer3 = [k + "@" + s + "@" + f for k,v in self.connections[f][s].items() if isinstance(v, dict) and v.get("type") == "connection" and ("@" + profile in v.values() or ( isinstance(v.get("password"),list) and "@" + profile in v.get("password")))]
            layer3 = [k + "@" + s + "@" + f for k,v in self.connections[f][s].items() if isinstance(v, dict) and v["type"] == "connection" and ("@" + profile in v.values() or ( isinstance(v["password"],list) and "@" + profile in v["password"]))]
            nodes.extend(layer3)
    return nodes

File diff suppressed because it is too large
@@ -14,11 +6,6 @@ from pathlib import Path
from copy import deepcopy
from .hooks import ClassHook, MethodHook
import io
import asyncio
import fcntl
from . import printer
from .tunnels import LocalStream


#functions and classes
@ClassHook
@@ -33,7 +28,7 @@ class node:
    - result(bool): True if expected value is found after running
      the commands using test method.

    - status (int): 0 if the method run or test run successfully.
    - status (int): 0 if the method run or test run succesfully.
      1 if connection failed.
      2 if expect timeouts without prompt or EOF.

@@ -62,7 +57,7 @@ class node:
    - port (str): Port to connect to node, default 22 for ssh and 23
      for telnet.

    - protocol (str): Select ssh, telnet, kubectl or docker. Default is ssh.
    - protocol (str): Select ssh or telnet. Default is ssh.

    - user (str): Username of the node.

@@ -88,12 +83,12 @@ class node:
    if profile and config != '':
        try:
            setattr(self,key,config.profiles[profile.group(1)][key])
        except KeyError:
        except:
            setattr(self,key,"")
    elif attr[key] == '' and key == "protocol":
        try:
            setattr(self,key,config.profiles["default"][key])
        except (KeyError, AttributeError):
        except:
            setattr(self,key,"ssh")
    else:
        setattr(self,key,attr[key])
@@ -103,8 +98,6 @@ class node:
        profile = re.search("^@(.*)", password[i])
        if profile and config != '':
            self.password.append(config.profiles[profile.group(1)]["password"])
        else:
            self.password.append(password[i])
    else:
        self.password = [password]
    if self.jumphost != "" and config != '':
@@ -114,12 +107,12 @@ class node:
        if profile:
            try:
                self.jumphost[key] = config.profiles[profile.group(1)][key]
            except KeyError:
            except:
                self.jumphost[key] = ""
        elif self.jumphost[key] == '' and key == "protocol":
            try:
                self.jumphost[key] = config.profiles["default"][key]
            except KeyError:
            except:
                self.jumphost[key] = "ssh"
    if isinstance(self.jumphost["password"],list):
        jumphost_password = []
@@ -127,8 +120,6 @@ class node:
            profile = re.search("^@(.*)", self.jumphost["password"][i])
            if profile:
                jumphost_password.append(config.profiles[profile.group(1)]["password"])
            else:
                jumphost_password.append(self.jumphost["password"][i])
        self.jumphost["password"] = jumphost_password
    else:
        self.jumphost["password"] = [self.jumphost["password"]]
@@ -146,48 +137,8 @@ class node:
        else:
            jumphost_cmd = jumphost_cmd + " {}".format("@".join([self.jumphost["user"],self.jumphost["host"]]))
        self.jumphost = f"-o ProxyCommand=\"{jumphost_cmd}\""
    elif self.jumphost["protocol"] == "ssm":
        ssm_target = self.jumphost["host"]
        ssm_cmd = f"aws ssm start-session --target {ssm_target} --document-name AWS-StartSSHSession --parameters 'portNumber=22'"
        if isinstance(self.jumphost.get("tags"), dict):
            if "profile" in self.jumphost["tags"]:
                ssm_cmd += f" --profile {self.jumphost['tags']['profile']}"
            if "region" in self.jumphost["tags"]:
                ssm_cmd += f" --region {self.jumphost['tags']['region']}"
        if self.jumphost["options"] != '':
            ssm_cmd += f" {self.jumphost['options']}"

        bastion_user_part = f"{self.jumphost['user']}@{ssm_target}" if self.jumphost['user'] else ssm_target

        ssh_opts = ""
        if isinstance(self.jumphost.get("tags"), dict) and "ssh_options" in self.jumphost["tags"]:
            ssh_opts = f" {self.jumphost['tags']['ssh_options']}"

        inner_ssh = f"ssh{ssh_opts} -o ProxyCommand='{ssm_cmd}' -W %h:%p {bastion_user_part}"
        self.jumphost = f"-o ProxyCommand=\"{inner_ssh}\""
    elif self.jumphost["protocol"] in ["kubectl", "docker"]:
        nc_cmd = "nc"
        if isinstance(self.jumphost.get("tags"), dict) and "nc_command" in self.jumphost["tags"]:
            nc_cmd = self.jumphost["tags"]["nc_command"]

        if self.jumphost["protocol"] == "kubectl":
            proxy_cmd = f"kubectl exec "
            if self.jumphost["options"] != '':
                proxy_cmd += f"{self.jumphost['options']} "
            proxy_cmd += f"{self.jumphost['host']} -i -- {nc_cmd} %h %p"
        else:
            proxy_cmd = f"docker "
            if self.jumphost["options"] != '':
                proxy_cmd += f"{self.jumphost['options']} "
            proxy_cmd += f"exec -i {self.jumphost['host']} {nc_cmd} %h %p"

        self.jumphost = f"-o ProxyCommand=\"{proxy_cmd}\""
    else:
        self.jumphost = ""

    self.output = ""
    self.status = 1
    self.result = {}
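All the jumphost branches above reduce to composing an OpenSSH `-o ProxyCommand=...` option, where `%h`/`%p` are expanded by ssh to the target host and port. A sketch of the kubectl variant with invented pod and namespace names:

```python
def kubectl_proxycommand(pod, options="", nc_cmd="nc"):
    """Build the -o ProxyCommand=... argument that tunnels ssh through
    `kubectl exec`, using netcat inside the pod to reach %h:%p."""
    proxy_cmd = "kubectl exec "
    if options:
        proxy_cmd += f"{options} "
    proxy_cmd += f"{pod} -i -- {nc_cmd} %h %p"
    return f"-o ProxyCommand=\"{proxy_cmd}\""

print(kubectl_proxycommand("bastion-pod", options="-n infra"))
# -o ProxyCommand="kubectl exec -n infra bastion-pod -i -- nc %h %p"
```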

@MethodHook
def _passtx(self, passwords, *, keyfile=None):
@@ -206,10 +157,8 @@ class node:
        try:
            decrypted = decryptor.decrypt(ast.literal_eval(passwd)).decode("utf-8")
            dpass.append(decrypted)
        except Exception:
            printer.error("Decryption failed: Missing or corrupted key.")
            printer.info("Verify your RSA key and configuration settings.")
            sys.exit(1)
        except:
            raise ValueError("Missing or corrupted key")
    return dpass
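`ast.literal_eval(passwd)` above turns the stored string form of the encrypted bytes back into a `bytes` object before decryption. The round-trip can be seen with plain data (no RSA involved in this sketch; the ciphertext is a stand-in):

```python
import ast

ciphertext = b"\x01\x02secret"        # stand-in for RSA-encrypted bytes
stored = str(ciphertext)              # what ends up in the config file
recovered = ast.literal_eval(stored)  # safely parse the literal back to bytes
print(recovered == ciphertext)  # True
```

Unlike `eval`, `ast.literal_eval` only accepts Python literals, so a tampered config value cannot execute code.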

@@ -232,54 +181,23 @@ class node:

@MethodHook
def _logclean(self, logfile, var = False):
    # Remove special ascii characters and process terminal cursor movements to clean logs.
    #Remove special ascii characters and other stuff from logfile.
    if var == False:
        t = open(logfile, "r").read()
    else:
        t = logfile

    lines = t.split('\n')
    cleaned_lines = []

    # Regex to capture: ANSI sequences, control characters (\r, \b, etc), and plain text chunks
    token_re = re.compile(r'(\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/ ]*[@-~])|\r|\b|\x7f|[\x00-\x1F]|[^\x1B\r\b\x7f\x00-\x1F]+)')

    for line in lines:
        buffer = []
        cursor = 0

        for token in token_re.findall(line):
            if token == '\r':
                cursor = 0
            elif token in ('\b', '\x7f'):
                if cursor > 0:
                    cursor -= 1
            elif token == '\x1B[D': # Left Arrow
                if cursor > 0:
                    cursor -= 1
            elif token == '\x1B[C': # Right Arrow
                if cursor < len(buffer):
                    cursor += 1
            elif token == '\x1B[K': # Clear to end of line
                buffer = buffer[:cursor]
            elif token.startswith('\x1B'):
                # Ignore other ANSI sequences (colors, etc)
                continue
            elif len(token) == 1 and ord(token) < 32:
                # Ignore other non-printable control chars
                continue
            else:
                # Regular printable text
                for char in token:
                    if cursor == len(buffer):
                        buffer.append(char)
                    else:
                        buffer[cursor] = char
                    cursor += 1
        cleaned_lines.append("".join(buffer))

    t = "\n".join(cleaned_lines).replace('\n\n', '\n').strip()

    while t.find("\b") != -1:
        t = re.sub('[^\b]\b', '', t)
    t = t.replace("\n","",1)
    t = t.replace("\a","")
    t = t.replace('\n\n', '\n')
    t = re.sub(r'.\[K', '', t)
    ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/ ]*[@-~])')
    t = ansi_escape.sub('', t)
    t = t.lstrip(" \n\r")
    t = t.replace("\r","")
    t = t.replace("\x0E","")
    t = t.replace("\x0F","")
    if var == False:
        d = open(logfile, "w")
        d.write(t)
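The `ansi_escape` pattern above strips CSI color and control sequences from captured output. Applied to a typical colored prompt line:

```python
import re

# Same shape as the pattern in _logclean: ESC followed by a single final byte,
# or a CSI sequence of parameter/intermediate bytes ending in a final byte.
ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')

raw = "\x1b[1;32mrouter#\x1b[0m show version"
print(ansi_escape.sub('', raw))  # router# show version
```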

@@ -322,206 +240,53 @@ class node:
        sleep(1)


def _setup_interact_environment(self, debug=False, logger=None, async_mode=False):
    size = re.search('columns=([0-9]+).*lines=([0-9]+)',str(os.get_terminal_size()))
    self.child.setwinsize(int(size.group(2)),int(size.group(1)))
    if logger:
        port_str = f":{self.port}" if self.port and self.protocol not in ["ssm", "kubectl", "docker"] else ""
        logger("success", f"Connected to {self.unique} at {self.host}{port_str} via: {self.protocol}")
@MethodHook
def interact(self, debug = False):
    '''
    Allow user to interact with the node directly, mostly used by connection manager.

    if 'logfile' in dir(self):
        # Initialize self.mylog
        if not 'mylog' in dir(self):
            self.mylog = io.BytesIO()
        if not async_mode:
    ### Optional Parameters:

    - debug (bool): If True, display all the connecting information
      before interact. Default False.
    '''
    connect = self._connect(debug = debug)
    if connect == True:
        size = re.search('columns=([0-9]+).*lines=([0-9]+)',str(os.get_terminal_size()))
        self.child.setwinsize(int(size.group(2)),int(size.group(1)))
        print("Connected to " + self.unique + " at " + self.host + (":" if self.port != '' else '') + self.port + " via: " + self.protocol)
        if 'logfile' in dir(self):
            # Initialize self.mylog
            if not 'mylog' in dir(self):
                self.mylog = io.BytesIO()
            self.child.logfile_read = self.mylog

            # Start the _savelog thread
            log_thread = threading.Thread(target=self._savelog)
            log_thread.daemon = True
            log_thread.start()
        if 'missingtext' in dir(self):
            print(self.child.after.decode(), end='')
        if self.idletime > 0 and not async_mode:
            x = threading.Thread(target=self._keepalive)
            x.daemon = True
            x.start()
        if debug:
            if 'mylog' in dir(self):
                if not async_mode:
                    print(self.mylog.getvalue().decode())
        if 'missingtext' in dir(self):
            print(self.child.after.decode(), end='')
        if self.idletime > 0:
            x = threading.Thread(target=self._keepalive)
            x.daemon = True
            x.start()
        if debug:
            print(self.mylog.getvalue().decode())
        self.child.interact(input_filter=self._filter)
        if 'logfile' in dir(self):
            with open(self.logfile, "w") as f:
                f.write(self._logclean(self.mylog.getvalue().decode(), True))

def _teardown_interact_environment(self):
    if 'logfile' in dir(self) and hasattr(self, 'mylog'):
        with open(self.logfile, "w") as f:
            f.write(self._logclean(self.mylog.getvalue().decode(), True))

async def _async_interact_loop(self, local_stream, resize_callback):
    local_stream.setup(resize_callback=resize_callback)
    try:
        child_fd = self.child.child_fd

        # 1. Flush ghost buffer (Clean UX)
        ghost_buffer = b''
        if getattr(self, 'missingtext', False):
            # If we are missing the password, we MUST show the password prompt
            ghost_buffer = (self.child.after or b'') + (self.child.buffer or b'')
        else:
            # We auto-logged in. Hide the messy password negotiation and just keep any pending live stream.
            ghost_buffer = self.child.buffer or b''

        # Fix user's pet peeve: Strip leading newlines to avoid the empty lines
        # the router echoes after receiving the password or blank line.
        if not getattr(self, 'missingtext', False):
            ghost_buffer = ghost_buffer.lstrip(b'\r\n ')

        if ghost_buffer:
            # Add a single clean newline so it doesn't merge with the Connected message
            await local_stream.write(b'\r\n' + ghost_buffer)
            if hasattr(self, 'mylog'):
                self.mylog.write(b'\n' + ghost_buffer)

        self.child.buffer = b''
        self.child.before = b''

        # 2. Set child fd non-blocking
        flags = fcntl.fcntl(child_fd, fcntl.F_GETFL)
        fcntl.fcntl(child_fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

        loop = asyncio.get_running_loop()
        child_reader_queue = asyncio.Queue()

        def _child_read_ready():
            try:
                data = os.read(child_fd, 4096)
                if data:
                    child_reader_queue.put_nowait(data)
                else:
                    child_reader_queue.put_nowait(b'')
            except BlockingIOError:
                pass
            except OSError:
                child_reader_queue.put_nowait(b'')

        loop.add_reader(child_fd, _child_read_ready)
        self.lastinput = time()

        async def ingress_task():
            while True:
                data = await local_stream.read()
                if not data:
                    break
                try:
                    os.write(child_fd, data)
                except OSError:
                    break
                self.lastinput = time()

        async def egress_task():
            # Continue stripping newlines from the live stream until we hit real text
            skip_newlines = not getattr(self, 'missingtext', False) and not ghost_buffer
            while True:
                data = await child_reader_queue.get()
                if not data:
                    break

                if skip_newlines:
                    stripped = data.lstrip(b'\r\n')
                    if stripped:
                        skip_newlines = False
                        data = stripped
                    else:
                        continue

                await local_stream.write(data)
                if hasattr(self, 'mylog'):
                    self.mylog.write(data)

        async def keepalive_task():
            while True:
                await asyncio.sleep(1)
                if time() - self.lastinput >= self.idletime:
                    try:
                        self.child.sendcontrol("e")
                        self.lastinput = time()
                    except Exception:
                        pass

        async def savelog_task():
            prev_size = 0
            while True:
                await asyncio.sleep(5)
                current_size = self.mylog.tell()
                if current_size != prev_size:
                    try:
                        with open(self.logfile, "w") as f:
                            f.write(self._logclean(self.mylog.getvalue().decode(), True))
                        prev_size = current_size
                    except Exception:
                        pass

        try:
            # gather runs until any task completes (or we just let them run until EOF breaks them)
            # Ingress breaks on user EOF. Egress breaks on child EOF.
            # We want to exit if either happens, so return_exceptions=False, but we need to cancel the others.
            tasks = [
                asyncio.create_task(ingress_task()),
                asyncio.create_task(egress_task())
            ]
            if self.idletime > 0:
                tasks.append(asyncio.create_task(keepalive_task()))
            if hasattr(self, 'logfile') and hasattr(self, 'mylog'):
                tasks.append(asyncio.create_task(savelog_task()))
            done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
            for p in pending:
                p.cancel()
        finally:
            loop.remove_reader(child_fd)
            try:
                flags = fcntl.fcntl(child_fd, fcntl.F_GETFL)
                fcntl.fcntl(child_fd, fcntl.F_SETFL, flags & ~os.O_NONBLOCK)
            except Exception:
                pass
    finally:
        local_stream.teardown()
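The `loop.add_reader` + queue pattern above (register a readiness callback on the pty fd, push chunks into an `asyncio.Queue`, consume them in a task) can be exercised with an ordinary pipe instead of a pty. A minimal self-contained sketch:

```python
import asyncio
import os

async def main():
    r, w = os.pipe()
    os.set_blocking(r, False)  # non-blocking, like the fcntl O_NONBLOCK step above
    loop = asyncio.get_running_loop()
    queue = asyncio.Queue()

    def on_readable():
        # Called by the event loop whenever the read end has data.
        try:
            queue.put_nowait(os.read(r, 4096))
        except BlockingIOError:
            pass

    loop.add_reader(r, on_readable)
    os.write(w, b"hello")
    data = await queue.get()   # consumer side, as in egress_task
    loop.remove_reader(r)
    os.close(r)
    os.close(w)
    return data

print(asyncio.run(main()))  # b'hello'
```

Bridging through a queue keeps the readiness callback tiny and non-async, while the consuming coroutine can await, filter, and log at its own pace.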


@MethodHook
def interact(self, debug=False, logger=None):
    '''
    Asynchronous interactive session using Smart Tunnel architecture.
    Allows multiplexing I/O and handling SIGWINCH events locally without blocking.
    '''
    connect = self._connect(debug=debug, logger=logger)
    if connect == True:
        try:
            self._setup_interact_environment(debug=debug, logger=logger, async_mode=True)

            local_stream = LocalStream()

            def resize_callback(rows, cols):
                try:
                    self.child.setwinsize(rows, cols)
                except Exception:
                    pass

            asyncio.run(self._async_interact_loop(local_stream, resize_callback))
        finally:
            self._teardown_interact_environment()
    else:
        if logger:
            logger("error", str(connect))
        else:
            printer.error(f"Connection failed: {str(connect)}")
        sys.exit(1)

        print(connect)
        exit(1)

@MethodHook
def run(self, commands, vars = None,*, folder = '', prompt = r'>$|#$|\$$|>.$|#.$|\$.$', stdout = False, timeout = 10, logger = None):
def run(self, commands, vars = None,*, folder = '', prompt = r'>$|#$|\$$|>.$|#.$|\$.$', stdout = False, timeout = 10):
    '''
    Run a command or list of commands on the node and return the output.


    ### Parameters:

    - commands (str/list): Commands to run on the node. Should be
@@ -558,25 +323,12 @@ class node:
    str: Output of the commands you ran on the node.

    '''
    connect = self._connect(timeout = timeout, logger = logger)
    connect = self._connect(timeout = timeout)
    now = datetime.datetime.now().strftime('%Y-%m-%d_%H%M%S')
    if connect == True:
        if logger:
            port_str = f":{self.port}" if self.port and self.protocol not in ["ssm", "kubectl", "docker"] else ""
            logger("success", f"Connected to {self.unique} at {self.host}{port_str} via: {self.protocol}")

        # Attempt to set the terminal size
        try:
            self.child.setwinsize(65535, 65535)
        except Exception:
            try:
                self.child.setwinsize(10000, 10000)
            except Exception:
                pass
        if "prompt" in self.tags:
            prompt = self.tags["prompt"]
        expects = [prompt, pexpect.EOF, pexpect.TIMEOUT]

        output = ''
        status = ''
        if not isinstance(commands, list):
@@ -587,12 +339,7 @@ class node:
        self.child.logfile_read = self.mylog
        for c in commands:
            if vars is not None:
                try:
                    c = c.format(**vars)
                except KeyError as e:
                    self.output = f"Error: Variable {e} not defined in task or inventory"
                    self.status = 1
                    return self.output
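The `c.format(**vars)` templating above substitutes task/inventory variables into each command and reports the missing name when a `KeyError` is raised. A standalone sketch (the helper name is invented for illustration):

```python
def render(command, vars):
    """Expand {placeholders} in a command; report missing variables."""
    try:
        return command.format(**vars)
    except KeyError as e:
        # str(KeyError) keeps the quotes around the missing name
        return f"Error: Variable {e} not defined in task or inventory"

print(render("ping {target}", {"target": "10.0.0.1"}))  # ping 10.0.0.1
print(render("ping {target}", {}))  # Error: Variable 'target' not defined in task or inventory
```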
                c = c.format(**vars)
            result = self.child.expect(expects, timeout = timeout)
            self.child.sendline(c)
            if result == 2:
@@ -601,8 +348,8 @@ class node:
        result = self.child.expect(expects, timeout = timeout)
        self.child.close()
        output = self._logclean(self.mylog.getvalue().decode(), True)
        if logger:
            logger("output", output)
        if stdout == True:
            print(output)
        if folder != '':
            with open(folder + "/" + self.unique + "_" + now + ".txt", "w") as f:
                f.write(output)
@@ -616,21 +363,19 @@ class node:
    else:
        self.output = connect
        self.status = 1
        if logger:
            logger("error", f"Connection failed: {connect}")
        if stdout == True:
            print(connect)
        if folder != '':
            with open(folder + "/" + self.unique + "_" + now + ".txt", "w") as f:
                f.write(connect)

                f.close()
        return connect
|
||||
|
||||
@MethodHook
|
||||
def test(self, commands, expected, vars = None,*, folder = '', prompt = r'>$|#$|\$$|>.$|#.$|\$.$', timeout = 10, logger = None):
|
||||
def test(self, commands, expected, vars = None,*, prompt = r'>$|#$|\$$|>.$|#.$|\$.$', timeout = 10):
|
||||
        '''
        Run a command or list of commands on the node, then check if expected value appears on the output after the last command.


        ### Parameters:

        - commands (str/list): Commands to run on the node. Should be
@@ -652,9 +397,6 @@ class node:

        ### Optional Named Parameters:

        - folder (str): Path where output log should be stored, leave
                        empty to not store logs.

        - prompt (str): Prompt to be expected after a command is finished
                        running. Usually linux uses ">" or EOF while
                        routers use ">" or "#". The default value should
@@ -669,25 +411,11 @@ class node:
                        false if prompt is found before.

        '''
        now = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
        connect = self._connect(timeout = timeout, logger = logger)
        connect = self._connect(timeout = timeout)
        if connect == True:
            if logger:
                port_str = f":{self.port}" if self.port and self.protocol not in ["ssm", "kubectl", "docker"] else ""
                logger("success", f"Connected to {self.unique} at {self.host}{port_str} via: {self.protocol}")

            # Attempt to set the terminal size
            try:
                self.child.setwinsize(65535, 65535)
            except Exception:
                try:
                    self.child.setwinsize(10000, 10000)
                except Exception:
                    pass
            if "prompt" in self.tags:
                prompt = self.tags["prompt"]
            expects = [prompt, pexpect.EOF, pexpect.TIMEOUT]

            output = ''
            if not isinstance(commands, list):
                commands = [commands]
@@ -699,12 +427,7 @@ class node:
            self.child.logfile_read = self.mylog
            for c in commands:
                if vars is not None:
                    try:
                        c = c.format(**vars)
                    except KeyError as e:
                        self.output = f"Error: Variable {e} not defined in task or inventory"
                        self.status = 1
                        return self.output
                    c = c.format(**vars)
                result = self.child.expect(expects, timeout = timeout)
                self.child.sendline(c)
                if result == 2:
@@ -713,12 +436,6 @@ class node:
            result = self.child.expect(expects, timeout = timeout)
            self.child.close()
            output = self._logclean(self.mylog.getvalue().decode(), True)
            if logger:
                logger("output", output)
            if folder != '':
                with open(folder + "/" + self.unique + "_" + now + ".txt", "w") as f:
                    f.write(output)
                f.close()
            self.output = output
            if result in [0, 1]:
                # lastcommand = commands[-1]
@@ -751,198 +468,104 @@ class node:
        return connect

    @MethodHook
    def _generate_ssh_sftp_cmd(self):
        cmd = self.protocol
        if self.idletime > 0:
            cmd += " -o ServerAliveInterval=" + str(self.idletime)
        if self.port:
            if self.protocol == "ssh":
                cmd += " -p " + self.port
            elif self.protocol == "sftp":
                cmd += " -P " + self.port
        if self.options:
            opts = self.options
            if self.protocol == "sftp":
                # Strip SSH-only flags that sftp doesn't support
                opts = re.sub(r'(?<!\S)-[XxtTAaNf]\b', '', opts).strip()
            if opts:
                cmd += " " + opts
        if self.jumphost:
            cmd += " " + self.jumphost
        user_host = f"{self.user}@{self.host}" if self.user else self.host
        cmd += f" {user_host}"
        return cmd
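The sftp branch above sanitizes the options string before appending it. A minimal sketch of that regex (the sample option strings here are illustrative, not from the repository):

```python
import re

def strip_ssh_only_flags(opts: str) -> str:
    # (?<!\S) matches only at the start of a whitespace-separated token,
    # and \b keeps longer options such as -o or multi-letter flags intact,
    # so only bare ssh-only flags like -X, -A or -t are removed.
    return re.sub(r'(?<!\S)-[XxtTAaNf]\b', '', opts).strip()

print(strip_ssh_only_flags("-X -o StrictHostKeyChecking=no"))
print(strip_ssh_only_flags("-p 2222 -t"))
```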

    @MethodHook
    def _generate_telnet_cmd(self):
        cmd = f"telnet {self.host}"
        if self.port:
            cmd += f" {self.port}"
        if self.options:
            cmd += f" {self.options}"
        return cmd

    @MethodHook
    def _generate_kube_cmd(self):
        cmd = f"kubectl exec {self.options} {self.host} -it --"
        kube_command = self.tags.get("kube_command", "/bin/bash") if isinstance(self.tags, dict) else "/bin/bash"
        cmd += f" {kube_command}"
        return cmd

    @MethodHook
    def _generate_docker_cmd(self):
        cmd = f"docker {self.options} exec -it {self.host}"
        docker_command = self.tags.get("docker_command", "/bin/bash") if isinstance(self.tags, dict) else "/bin/bash"
        cmd += f" {docker_command}"
        return cmd

    @MethodHook
    def _generate_ssm_cmd(self):
        region = self.tags.get("region", "") if isinstance(self.tags, dict) else ""
        profile = self.tags.get("profile", "") if isinstance(self.tags, dict) else ""
        cmd = f"aws ssm start-session --target {self.host}"
        if region:
            cmd += f" --region {region}"
        if profile:
            cmd += f" --profile {profile}"
        if self.options:
            cmd += f" {self.options}"
        return cmd
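Each `_generate_*_cmd` helper assembles one protocol-specific shell command from node attributes. A condensed, standalone sketch of the ssm variant (host, tags and options values are illustrative):

```python
def generate_ssm_cmd(host, tags=None, options=""):
    # tags is an optional dict, mirroring the isinstance(self.tags, dict) guard above
    tags = tags if isinstance(tags, dict) else {}
    cmd = f"aws ssm start-session --target {host}"
    if tags.get("region"):
        cmd += f" --region {tags['region']}"
    if tags.get("profile"):
        cmd += f" --profile {tags['profile']}"
    if options:
        cmd += f" {options}"
    return cmd

print(generate_ssm_cmd("i-0abc", {"region": "us-east-1"}))
```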

    @MethodHook
    def _get_cmd(self):
    def _connect(self, debug = False, timeout = 10, max_attempts = 3):
        # Method to connect to the node, it parse all the information, create the ssh/telnet command and login to the node.
        if self.protocol in ["ssh", "sftp"]:
            return self._generate_ssh_sftp_cmd()
            cmd = self.protocol
            if self.idletime > 0:
                cmd = cmd + " -o ServerAliveInterval=" + str(self.idletime)
            if self.port != '':
                if self.protocol == "ssh":
                    cmd = cmd + " -p " + self.port
                elif self.protocol == "sftp":
                    cmd = cmd + " -P " + self.port
            if self.options != '':
                cmd = cmd + " " + self.options
            if self.logs != '':
                self.logfile = self._logfile()
            if self.jumphost != '':
                cmd = cmd + " " + self.jumphost
            if self.password[0] != '':
                passwords = self._passtx(self.password)
            else:
                passwords = []
            if self.user == '':
                cmd = cmd + " {}".format(self.host)
            else:
                cmd = cmd + " {}".format("@".join([self.user,self.host]))
            expects = ['yes/no', 'refused', 'supported', 'Invalid|[u|U]sage: (ssh|sftp)', 'ssh-keygen.*\"', 'timeout|timed.out', 'unavailable', 'closed', '[p|P]assword:|[u|U]sername:', r'>$|#$|\$$|>.$|#.$|\$.$', 'suspend', pexpect.EOF, pexpect.TIMEOUT, "No route to host", "resolve hostname", "no matching", "[b|B]ad (owner|permissions)"]
        elif self.protocol == "telnet":
            return self._generate_telnet_cmd()
        elif self.protocol == "kubectl":
            return self._generate_kube_cmd()
        elif self.protocol == "docker":
            return self._generate_docker_cmd()
        elif self.protocol == "ssm":
            return self._generate_ssm_cmd()
            cmd = "telnet " + self.host
            if self.port != '':
                cmd = cmd + " " + self.port
            if self.options != '':
                cmd = cmd + " " + self.options
            if self.logs != '':
                self.logfile = self._logfile()
            if self.password[0] != '':
                passwords = self._passtx(self.password)
            else:
                passwords = []
            expects = ['[u|U]sername:', 'refused', 'supported', 'invalid option', 'ssh-keygen.*\"', 'timeout|timed.out', 'unavailable', 'closed', '[p|P]assword:', r'>$|#$|\$$|>.$|#.$|\$.$', 'suspend', pexpect.EOF, pexpect.TIMEOUT, "No route to host", "resolve hostname", "no matching", "[b|B]ad (owner|permissions)"]
        else:
            printer.error(f"Invalid protocol: {self.protocol}")
            sys.exit(1)

    @MethodHook
    def _connect(self, debug=False, timeout=10, max_attempts=3, logger=None):

        cmd = self._get_cmd()
        passwords = self._passtx(self.password) if self.password and any(self.password) else []
        if self.logs != '':
            self.logfile = self._logfile()
        default_prompt = r'>$|#$|\$$|>.$|#.$|\$.$'
        prompt = self.tags.get("prompt", default_prompt) if isinstance(self.tags, dict) else default_prompt
        password_prompt = '[p|P]assword:|[u|U]sername:' if self.protocol != 'telnet' else '[p|P]assword:'

        expects = {
            "ssh": ['yes/no', 'refused', 'supported', 'Invalid|[u|U]sage: ssh', 'ssh-keygen.*\"', 'timeout|timed.out', 'unavailable', 'closed', password_prompt, prompt, 'suspend', pexpect.EOF, pexpect.TIMEOUT, "No route to host", "resolve hostname", "no matching", "[b|B]ad (owner|permissions)"],
            "sftp": ['yes/no', 'refused', 'supported', 'Invalid|[u|U]sage: sftp', 'ssh-keygen.*\"', 'timeout|timed.out', 'unavailable', 'closed', password_prompt, prompt, 'suspend', pexpect.EOF, pexpect.TIMEOUT, "No route to host", "resolve hostname", "no matching", "[b|B]ad (owner|permissions)"],
            "telnet": ['[u|U]sername:', 'refused', 'supported', 'invalid|unrecognized option', 'ssh-keygen.*\"', 'timeout|timed.out', 'unavailable', 'closed', password_prompt, prompt, 'suspend', pexpect.EOF, pexpect.TIMEOUT, "No route to host", "resolve hostname", "no matching", "[b|B]ad (owner|permissions)"],
            "kubectl": ['[u|U]sername:', '[r|R]efused', '[E|e]rror', 'DEPRECATED', pexpect.TIMEOUT, password_prompt, prompt, pexpect.EOF, "expired|invalid"],
            "docker": ['[u|U]sername:', 'Cannot', '[E|e]rror', 'failed', 'not a docker command', 'unknown', 'unable to resolve', pexpect.TIMEOUT, password_prompt, prompt, pexpect.EOF],
            "ssm": ['[u|U]sername:', 'Cannot', '[E|e]rror', 'failed', 'SessionManagerPlugin', '[u|U]nknown', 'unable to resolve', pexpect.TIMEOUT, password_prompt, prompt, pexpect.EOF]
        }

        error_indices = {
            "ssh": [1, 2, 3, 4, 5, 6, 7, 12, 13, 14, 15, 16],
            "sftp": [1, 2, 3, 4, 5, 6, 7, 12, 13, 14, 15, 16],
            "telnet": [1, 2, 3, 4, 5, 6, 7, 12, 13, 14, 15, 16],
            "kubectl": [1, 2, 3, 4, 8], # Define error indices for kube
            "docker": [1, 2, 3, 4, 5, 6, 7], # Define error indices for docker
            "ssm": [1, 2, 3, 4, 5, 6, 7]
        }

        eof_indices = {
            "ssh": [8, 9, 10, 11],
            "sftp": [8, 9, 10, 11],
            "telnet": [8, 9, 10, 11],
            "kubectl": [5, 6, 7], # Define eof indices for kube
            "docker": [8, 9, 10], # Define eof indices for docker
            "ssm": [8, 9, 10]
        }

        initial_indices = {
            "ssh": [0],
            "sftp": [0],
            "telnet": [0],
            "kubectl": [0], # Define special indices for kube
            "docker": [0], # Define special indices for docker
            "ssm": [0]
        }
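The per-protocol `expects` lists plus the three index tables drive the login loop: `child.expect()` returns the index of the first pattern that matched, and the tables classify that index into a handling branch. A dependency-free sketch of that dispatch (the protocol name and indices mirror the kubectl tables above; the labels are illustrative):

```python
# Index tables as in the kubectl entries above
error_indices = {"kubectl": [1, 2, 3, 4, 8]}
eof_indices = {"kubectl": [5, 6, 7]}
initial_indices = {"kubectl": [0]}

def classify(protocol, index):
    # index is what pexpect's child.expect(...) would return
    if index in initial_indices[protocol]:
        return "initial"           # e.g. a username prompt on first contact
    if index in error_indices[protocol]:
        return "error"             # terminate / retry / report failure
    if index in eof_indices[protocol]:
        return "login-or-prompt"   # password prompt, shell prompt, or EOF
    return "retry"

print([classify("kubectl", i) for i in (0, 3, 6)])
```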

            raise ValueError("Invalid protocol: " + self.protocol)
        attempts = 1
        while attempts <= max_attempts:
            child = pexpect.spawn(cmd)
            if isinstance(self.tags, dict) and self.tags.get("console"):
                child.sendline()
            if debug:
                if logger:
                    logger("debug", f"Command:\n{cmd}")
                print(cmd)
            self.mylog = io.BytesIO()
            self.mylog.write(f"[i] [DEBUG] Command:\r\n {cmd}\r\n".encode())
            child.logfile_read = self.mylog


            if len(passwords) > 0:
                loops = len(passwords)
            else:
                loops = 1
            endloop = False
            for i in range(len(passwords) if passwords else 1):
            for i in range(0, loops):
                while True:
                    results = child.expect(expects[self.protocol], timeout=timeout)
                    results_value = expects[self.protocol][results]

                    if results in initial_indices[self.protocol]:
                    results = child.expect(expects, timeout=timeout)
                    if results == 0:
                        if self.protocol in ["ssh", "sftp"]:
                            child.sendline('yes')
                        elif self.protocol in ["telnet", "kubectl", "docker", "ssm"]:
                            if self.user:
                        elif self.protocol == "telnet":
                            if self.user != '':
                                child.sendline(self.user)
                            else:
                                self.missingtext = True
                                break

                    elif results in error_indices[self.protocol]:
                    if results in [1, 2, 3, 4, 5, 6, 7, 12, 13, 14, 15, 16]:
                        child.terminate()
                        if results_value == pexpect.TIMEOUT and attempts != max_attempts:
                        if results == 12 and attempts != max_attempts:
                            attempts += 1
                            endloop = True
                            break
                        else:
                            after = "Connection timeout" if results_value == pexpect.TIMEOUT else child.after.decode()
                            return f"Connection failed code: {results}\n{child.before.decode().lstrip()}{after}{child.readline().decode()}".rstrip()

                    elif results in eof_indices[self.protocol]:
                        if results_value == password_prompt:
                            if passwords:
                                child.sendline(passwords[i])
                            if results == 12:
                                after = "Connection timeout"
                            else:
                                self.missingtext = True
                                break
                        elif results_value == "suspend":
                            child.sendline("\r")
                            sleep(2)
                            after = child.after.decode()
                            return ("Connection failed code:" + str(results) + "\n" + child.before.decode().lstrip() + after + child.readline().decode()).rstrip()
                    if results == 8:
                        if len(passwords) > 0:
                            child.sendline(passwords[i])
                        else:
                            endloop = True
                            child.sendline()
                            break

                            self.missingtext = True
                            break
                    if results in [9, 11]:
                        endloop = True
                        child.sendline()
                        break
                    if results == 10:
                        child.sendline("\r")
                        sleep(2)
            if endloop:
                break
                    if results_value == pexpect.TIMEOUT:
                    if results == 12:
                        continue
                    else:
                        break

        if isinstance(self.tags, dict) and self.tags.get("post_connect_commands"):
            cmds = self.tags.get("post_connect_commands")
            commands = [cmds] if isinstance(cmds, str) else cmds
            for command in commands:
                child.sendline(command)
                sleep(1)
        child.readline(0)
        self.child = child
        from pexpect import fdpexpect
        self.raw_child = fdpexpect.fdspawn(self.child.child_fd)
        return True
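The `post_connect_commands` tag above may hold either a single string or a list, and is coerced to a list before each entry is sent with `sendline`. A minimal sketch of that normalization (command strings are illustrative):

```python
def normalize_commands(cmds):
    # A bare string becomes a one-element list; lists pass through unchanged.
    return [cmds] if isinstance(cmds, str) else cmds

print(normalize_commands("terminal length 0"))
print(normalize_commands(["enable", "terminal length 0"]))
```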

    @ClassHook
@@ -964,7 +587,7 @@ class nodes:
        Created after running method test.

        - status (dict): Dictionary formed by nodes unique as keys, value:
                         0 if method run or test ended successfully.
                         0 if method run or test ended succesfully.
                         1 if connection failed.
                         2 if expect timeouts without prompt or EOF.

@@ -1005,11 +628,10 @@ class nodes:


    @MethodHook
    def run(self, commands, vars = None,*, folder = None, prompt = None, stdout = None, parallel = 10, timeout = None, on_complete = None, logger = None):
    def run(self, commands, vars = None,*, folder = None, prompt = None, stdout = None, parallel = 10, timeout = None):
        '''
        Run a command or list of commands on all the nodes in nodelist.


        ### Parameters:

        - commands (str/list): Commands to run on the nodes. Should be str or
@@ -1047,11 +669,6 @@ class nodes:
        - timeout (int): Time in seconds for expect to wait for prompt/EOF.
                         default 10.

        - on_complete (callable): Optional callback called when each node
                                  finishes. Receives (unique, output, status).
                                  Called from the node's thread so it must
                                  be thread-safe.

        ###Returns:

        dict: Dictionary formed by nodes unique as keys, Output of the
@@ -1066,46 +683,23 @@ class nodes:
            Path(folder).mkdir(parents=True, exist_ok=True)
        if prompt != None:
            args["prompt"] = prompt
        if stdout != None and on_complete is None:
        if stdout != None:
            args["stdout"] = stdout
        if timeout != None:
            args["timeout"] = timeout
        output = {}
        status = {}
        tasks = []

        def _run_node(node_obj, node_args, callback):
            """Wrapper that runs a node and fires the callback on completion."""
            node_obj.run(**node_args)
            if callback:
                callback(node_obj.unique, node_obj.output, node_obj.status)

        for n in self.nodelist:
            nodesargs[n.unique] = deepcopy(args)
            if vars != None:
                nodesargs[n.unique]["vars"] = {}
                if "__global__" in vars.keys():
                    nodesargs[n.unique]["vars"].update(vars["__global__"])
                for var_key, var_val in vars.items():
                    if var_key == "__global__":
                        continue
                    try:
                        if re.search(var_key, n.unique, re.IGNORECASE):
                            nodesargs[n.unique]["vars"].update(var_val)
                    except re.error:
                        if var_key == n.unique:
                            nodesargs[n.unique]["vars"].update(var_val)

            # Pass the logger to the node
            nodesargs[n.unique]["logger"] = logger

            if on_complete:
                tasks.append(threading.Thread(target=_run_node, args=(n, nodesargs[n.unique], on_complete)))
            else:
                tasks.append(threading.Thread(target=n.run, kwargs=nodesargs[n.unique]))

                if n.unique in vars.keys():
                    nodesargs[n.unique]["vars"].update(vars[n.unique])
            tasks.append(threading.Thread(target=n.run, kwargs=nodesargs[n.unique]))
        taskslist = list(self._splitlist(tasks, parallel))

        for t in taskslist:
            for i in t:
                i.start()
@@ -1119,11 +713,10 @@ class nodes:
        return output
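The per-node `vars` resolution above first applies `__global__`, then tries each remaining key as a case-insensitive regex against the node's unique id, falling back to an exact match when the key is not a valid regex. A standalone sketch (node ids and variable values are illustrative):

```python
import re

def resolve_vars(unique, vars):
    # Start from __global__ defaults, then layer on matching per-node entries.
    merged = dict(vars.get("__global__", {}))
    for key, val in vars.items():
        if key == "__global__":
            continue
        try:
            if re.search(key, unique, re.IGNORECASE):
                merged.update(val)
        except re.error:
            # Invalid regex: fall back to an exact id comparison.
            if key == unique:
                merged.update(val)
    return merged

print(resolve_vars("router1@office", {"__global__": {"user": "admin"}, "router": {"vlan": 10}}))
```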

    @MethodHook
    def test(self, commands, expected, vars = None,*, folder = None, prompt = None, parallel = 10, timeout = None, on_complete = None, logger = None):
    def test(self, commands, expected, vars = None,*, prompt = None, parallel = 10, timeout = None):
        '''
        Run a command or list of commands on all the nodes in nodelist, then check if expected value appears on the output after the last command.


        ### Parameters:

        - commands (str/list): Commands to run on the node. Should be str or
@@ -1158,11 +751,6 @@ class nodes:
        - timeout (int): Time in seconds for expect to wait for prompt/EOF.
                         default 10.

        - on_complete (callable): Optional callback called when each node
                                  finishes. Receives (unique, output, status).
                                  Called from the node's thread so it must
                                  be thread-safe.

        ### Returns:

        dict: Dictionary formed by nodes unique as keys, value is True if
@@ -1174,9 +762,6 @@ class nodes:
        nodesargs = {}
        args["commands"] = commands
        args["expected"] = expected
        if folder != None:
            args["folder"] = folder
            Path(folder).mkdir(parents=True, exist_ok=True)
        if prompt != None:
            args["prompt"] = prompt
        if timeout != None:
@@ -1185,35 +770,15 @@ class nodes:
        result = {}
        status = {}
        tasks = []

        def _test_node(node_obj, node_args, callback):
            """Wrapper that runs a node test and fires the callback on completion."""
            node_obj.test(**node_args)
            if callback:
                callback(node_obj.unique, node_obj.output, node_obj.status, node_obj.result)

        for n in self.nodelist:
            nodesargs[n.unique] = deepcopy(args)
            if vars != None:
                nodesargs[n.unique]["vars"] = {}
                if "__global__" in vars.keys():
                    nodesargs[n.unique]["vars"].update(vars["__global__"])
                for var_key, var_val in vars.items():
                    if var_key == "__global__":
                        continue
                    try:
                        if re.search(var_key, n.unique, re.IGNORECASE):
                            nodesargs[n.unique]["vars"].update(var_val)
                    except re.error:
                        if var_key == n.unique:
                            nodesargs[n.unique]["vars"].update(var_val)
            nodesargs[n.unique]["logger"] = logger

            if on_complete:
                tasks.append(threading.Thread(target=_test_node, args=(n, nodesargs[n.unique], on_complete)))
            else:
                tasks.append(threading.Thread(target=n.test, kwargs=nodesargs[n.unique]))

                if n.unique in vars.keys():
                    nodesargs[n.unique]["vars"].update(vars[n.unique])
            tasks.append(threading.Thread(target=n.test, kwargs=nodesargs[n.unique]))
        taskslist = list(self._splitlist(tasks, parallel))
        for t in taskslist:
            for i in t:

@@ -1,402 +0,0 @@
import argparse
import sys

class Parser:
    def __init__(self):
        self.parser = argparse.ArgumentParser(description="Capture packets remotely using a saved SSH node", epilog="All unknown arguments will be passed to tcpdump.")

        self.parser.add_argument("node", nargs='?', help="Name of the saved node (must use SSH)")
        self.parser.add_argument("interface", nargs='?', help="Network interface to capture on")
        self.parser.add_argument("--ns", "--namespace", dest="namespace", help="Optional network namespace")
        self.parser.add_argument("-w","--wireshark", action="store_true", help="Open live capture in Wireshark")
        self.parser.add_argument("--set-wireshark-path", metavar="PATH", help="Set the default path to Wireshark binary")
        self.parser.add_argument(
            "-f", "--filter",
            dest="tcpdump_filter",
            metavar="ARG",
            nargs="*",
            default=["not", "port", "22"],
            help="tcpdump filter expression (e.g., -f port 443 and udp). Default: not port 22"
        )
        self.parser.add_argument(
            "--unknown-args",
            action="store_true",
            default=True,
            help=argparse.SUPPRESS
        )
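Because the epilog promises that unknown arguments are forwarded to tcpdump, the plugin presumably parses with `parse_known_args` so unrecognized flags fall through instead of erroring. A condensed sketch of that behavior (the argv values are illustrative):

```python
import argparse

parser = argparse.ArgumentParser(description="Capture packets remotely")
parser.add_argument("node", nargs="?")
parser.add_argument("interface", nargs="?")
parser.add_argument("-f", "--filter", dest="tcpdump_filter", nargs="*",
                    default=["not", "port", "22"])

# -nn is not a declared option, so parse_known_args leaves it in `unknown`,
# ready to be appended verbatim to the remote tcpdump command line.
args, unknown = parser.parse_known_args(["router1", "eth0", "-f", "port", "443", "-nn"])
print(args.node, args.interface, args.tcpdump_filter, unknown)
```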

class Entrypoint:
    @staticmethod
    def get_remote_capture_class():
        import subprocess
        import random
        import socket
        import time
        import threading
        from pexpect import TIMEOUT
        from connpy import printer

        class RemoteCapture:
            def __init__(self, connapp, node_name, interface, namespace=None, use_wireshark=False, tcpdump_filter=None, tcpdump_args=None):
                self.connapp = connapp
                self.node_name = node_name
                self.interface = interface
                self.namespace = namespace
                self.use_wireshark = use_wireshark
                self.tcpdump_filter = tcpdump_filter or []
                self.tcpdump_args = tcpdump_args if isinstance(tcpdump_args, list) else []

                if node_name.startswith("@"): # fuzzy match
                    matches = self.connapp.services.nodes.list_nodes(node_name)
                else:
                    matches = self.connapp.services.nodes.list_nodes(f"^{node_name}")

                if not matches:
                    printer.error(f"Node '{node_name}' not found.")
                    sys.exit(2)
                elif len(matches) > 1:
                    from ..cli.helpers import choose
                    matches[0] = choose(self.connapp, matches, "node", "capture")

                    if matches[0] is None:
                        sys.exit(7)

                node_data = self.connapp.services.nodes.get_node_details(matches[0])
                self.node = self.connapp.node(matches[0], **node_data, config=self.connapp.config)

                if self.node.protocol != "ssh":
                    printer.error(f"Node '{self.node.unique}' must be an SSH connection.")
                    sys.exit(2)

                settings = self.connapp.services.config_svc.get_settings()
                self.wireshark_path = settings.get("wireshark_path")

            def _start_local_listener(self, port, ws_proc=None):
                self.fake_connection = False
                self.listener_active = True
                self.listener_conn = None
                self.listener_connected = threading.Event()

                def listen():
                    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                        s.bind(("localhost", port))
                        s.listen(1)
                        printer.start(f"Listening on localhost:{port}")

                        conn, addr = s.accept()
                        self.listener_conn = conn
                        if not self.fake_connection:
                            printer.start(f"Connection from {addr}")
                        self.listener_connected.set()

                        try:
                            while self.listener_active:
                                data = conn.recv(4096)
                                if not data:
                                    break

                                if self.use_wireshark and ws_proc:
                                    try:
                                        ws_proc.stdin.write(data)
                                        ws_proc.stdin.flush()
                                    except BrokenPipeError:
                                        printer.info("Wireshark closed the pipe.")
                                        break
                                else:
                                    sys.stdout.buffer.write(data)
                                    sys.stdout.buffer.flush()
                        except Exception as e:
                            if isinstance(e, BrokenPipeError):
                                printer.info("Listener closed due to broken pipe.")
                            else:
                                printer.error(f"Listener error: {e}")
                        finally:
                            conn.close()
                            self.listener_conn = None

                self.listener_thread = threading.Thread(target=listen)
                self.listener_thread.daemon = True
                self.listener_thread.start()

            def _is_port_in_use(self, port):
                with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                    return s.connect_ex(('localhost', port)) == 0

            def _find_free_port(self, start=20000, end=30000):
                for _ in range(10):
                    port = random.randint(start, end)
                    if not self._is_port_in_use(port):
                        return port
                printer.error("No free port found for SSH tunnel.")
                sys.exit(1)
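The port probe above relies on `connect_ex` returning 0 only when something is already accepting on the port; any nonzero errno means the port is free to use for the reverse tunnel. A standalone sketch of the same pair of helpers (port range mirrors the defaults above):

```python
import random
import socket

def is_port_in_use(port):
    # connect_ex returns 0 on success, an errno (e.g. ECONNREFUSED) otherwise.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex(("localhost", port)) == 0

def find_free_port(start=20000, end=30000, tries=10):
    for _ in range(tries):
        port = random.randint(start, end)
        if not is_port_in_use(port):
            return port
    raise RuntimeError("No free port found")

print(find_free_port())
```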

            def _monitor_wireshark(self, ws_proc):
                try:
                    while True:
                        try:
                            ws_proc.wait(timeout=1)
                            self.listener_active = False
                            if self.listener_conn:
                                printer.info("Wireshark exited, stopping listener.")
                                try:
                                    self.listener_conn.shutdown(socket.SHUT_RDWR)
                                    self.listener_conn.close()
                                except Exception:
                                    pass
                            break
                        except subprocess.TimeoutExpired:
                            if not self.listener_active:
                                break
                            time.sleep(0.2)
                except Exception as e:
                    printer.warning(f"Error in monitor_wireshark: {e}")

            def _detect_sudo_requirement(self):
                base_cmd = f"tcpdump -i {self.interface} -w - -U -c 1"
                if self.namespace:
                    base_cmd = f"ip netns exec {self.namespace} {base_cmd}"

                cmds = [base_cmd, f"sudo {base_cmd}"]

                printer.info(f"Verifying sudo requirement")
                for cmd in cmds:
                    try:
                        self.node.child.sendline(cmd)
                        start_time = time.time()
                        while time.time() - start_time < 3:
                            try:
                                index = self.node.child.expect([
                                    r'listening on',
                                    r'permission denied',
                                    r'cannot',
                                    r'No such file or directory',
                                ], timeout=1)

                                if index == 0:
                                    self.node.child.send("\x03")
                                    return "sudo" in cmd
                                else:
                                    break
                            except Exception:
                                continue

                        self.node.child.send("\x03")
                        time.sleep(0.5)
                        try:
                            self.node.child.read_nonblocking(size=1024, timeout=0.5)
                        except Exception:
                            pass

                    except Exception as e:
                        printer.warning(f"Error during sudo detection: {e}")
                        continue

                printer.error(f"Failed to run tcpdump on remote node '{self.node.unique}'")
                sys.exit(4)

            def _monitor_capture_output(self):
                try:
                    index = self.node.child.expect([
                        r'Broken pipe',
                        r'packet[s]? captured'
                    ], timeout=None)
                    if index == 0:
                        printer.error("Tcpdump failed: Broken pipe.")
                    else:
                        printer.success("Tcpdump finished capturing packets.")

                    self.listener_active = False
                except Exception:
                    pass

            def _sendline_until_connected(self, cmd, retries=5, interval=2):
                for attempt in range(1, retries + 1):
                    printer.info(f"Attempt {attempt}/{retries} to connect listener...")
                    self.node.child.sendline(cmd)

                    try:
                        index = self.node.child.expect([
                            r'listening on',
                            TIMEOUT,
                            r'permission',
                            r'not permitted',
                            r'invalid',
                            r'unrecognized',
                            r'Unable',
                            r'No such',
                            r'illegal',
                            r'not found',
                            r'non-ether',
                            r'syntax error'
                        ], timeout=5)

                        if index == 0:
                            self.monitor_end = threading.Thread(target=self._monitor_capture_output)
                            self.monitor_end.daemon = True
                            self.monitor_end.start()

                            if self.listener_connected.wait(timeout=interval):
                                printer.success("Listener successfully received a connection.")
                                return True
                            else:
                                printer.warning("No connection yet. Retrying...")

                        elif index == 1:
                            error = f"tcpdump did not respond within the expected time.\nCommand used:\n{cmd}\n\u2192 Please verify the command syntax."
                            return f"{error}"
                        else:
                            before_last_line = self.node.child.before.decode().splitlines()[-1]
                            error = f"Tcpdump error detected: {before_last_line}{self.node.child.after.decode()}{self.node.child.readline().decode()}".rstrip()
                            return f"{error}"

                    except Exception as e:
                        printer.warning(f"Unexpected error during tcpdump startup: {e}")
                        return False

                return False


            def _build_tcpdump_command(self):
                base = f"tcpdump -i {self.interface}"
                if self.use_wireshark:
                    base += " -w - -U"
                else:
                    base += " -l"

                if self.namespace:
                    base = f"ip netns exec {self.namespace} {base}"

                if self.requires_sudo:
                    base = f"sudo {base}"

                if self.tcpdump_args:
                    base += " " + " ".join(self.tcpdump_args)

                if self.tcpdump_filter:
                    base += " " + " ".join(self.tcpdump_filter)

                base += f" | nc localhost {self.local_port}"
                return base
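The remote command composed above streams raw pcap data (`-w - -U`) from tcpdump through `nc` into the reverse-forwarded port. A standalone sketch of the same composition, simplified to the Wireshark path (interface and port values are illustrative):

```python
def build_tcpdump_command(interface, local_port, namespace=None, sudo=False,
                          tcpdump_filter=("not", "port", "22")):
    # -w - writes pcap to stdout; -U flushes per packet so Wireshark updates live.
    base = f"tcpdump -i {interface} -w - -U"
    if namespace:
        base = f"ip netns exec {namespace} {base}"
    if sudo:
        base = f"sudo {base}"
    if tcpdump_filter:
        base += " " + " ".join(tcpdump_filter)
    # nc pushes the stream into the reverse-forwarded localhost port.
    return base + f" | nc localhost {local_port}"

print(build_tcpdump_command("eth0", 23456, sudo=True))
```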

            def run(self):
                if self.use_wireshark:
                    if not self.wireshark_path:
                        printer.error("Wireshark path not set in config.\nUse '--set-wireshark-path /full/path/to/wireshark' to configure it.")
                        sys.exit(1)

                self.local_port = self._find_free_port()
                self.node.options += f" -o ExitOnForwardFailure=yes -R {self.local_port}:localhost:{self.local_port}"

                connection = self.node._connect()
                if connection is not True:
                    printer.error(f"Could not connect to {self.node.unique}\n{connection}")
                    sys.exit(1)

                self.requires_sudo = self._detect_sudo_requirement()
                tcpdump_cmd = self._build_tcpdump_command()

                ws_proc = None
                monitor_thread = None

                if self.use_wireshark:
                    printer.info(f"Live capture from {self.node.unique}:{self.interface}, launching Wireshark...")
                    try:
                        ws_proc = subprocess.Popen([self.wireshark_path, "-k", "-i", "-"], stdin=subprocess.PIPE, stderr=subprocess.PIPE)
                    except Exception as e:
                        printer.error(f"Failed to launch Wireshark: {e}\nMake sure the path is correct and Wireshark is installed.")
                        exit(1)

                    monitor_thread = threading.Thread(target=self._monitor_wireshark, args=(ws_proc,))
                    monitor_thread.daemon = True
                    monitor_thread.start()
                else:
                    printer.info(f"Live text capture from {self.node.unique}:{self.interface}")
                    printer.info("Press Ctrl+C to stop.\n")

                try:
                    self._start_local_listener(self.local_port, ws_proc=ws_proc)
                    time.sleep(1)

                    result = self._sendline_until_connected(tcpdump_cmd, retries=5, interval=2)
                    if result is not True:
                        if isinstance(result, str):
                            printer.error(f"{result}")
                        else:
                            printer.error("Listener connection failed after all retries.")
                        self.listener_active = False
                        return

                    while self.listener_active:
                        time.sleep(0.5)

                except KeyboardInterrupt:
                    print("")
                    printer.warning("Capture interrupted by user.")
                    self.listener_active = False
                finally:
                    if self.listener_conn:
                        try:
                            self.listener_conn.shutdown(socket.SHUT_RDWR)
                            self.listener_conn.close()
                        except OSError: pass
                    if hasattr(self.node, "child"):
                        self.node.child.close(force=True)

        return RemoteCapture

    def __init__(self, args, parser, connapp):
        from connpy import printer
        if "--" in args.unknown_args:
            args.unknown_args.remove("--")
        if args.set_wireshark_path:
            connapp.services.config_svc.update_setting("wireshark_path", args.set_wireshark_path)
            printer.success(f"Wireshark path updated to: {args.set_wireshark_path}")
            return

        if not args.node or not args.interface:
            parser.error("node and interface are required unless --set-wireshark-path is used")

        RemoteCapture = self.get_remote_capture_class()
        capture = RemoteCapture(
            connapp=connapp, node_name=args.node, interface=args.interface,
            namespace=args.namespace, use_wireshark=args.wireshark,
            tcpdump_filter=args.tcpdump_filter, tcpdump_args=args.unknown_args
        )
        capture.run()

def _connpy_tree(info=None):
    """Declarative completion tree for the capture plugin following completion.py patterns."""
    nodes = info.get("nodes", []) if info else []



    # State 2: Main capture loop (No setup flag here)
    capture_main = {"__exclude_used__": True}

    # Inline logic to suggest nodes only if no positional has been provided yet
    get_nodes = lambda w: nodes if not [x for x in w[:-1] if not x.startswith("-") and x != "capture"] else []
    capture_main["__extra__"] = get_nodes
    capture_main["*"] = capture_main

    for f in ["--wireshark", "-w", "--help", "-h"]:
        capture_main[f] = capture_main
    for f in ["--namespace", "--filter", "-f"]:
        capture_main[f] = {"*": capture_main}

    # State 1: Start (Highly discoverable configuration)
    capture_start = {
        "__exclude_used__": True,
        "__extra__": get_nodes,
        "--set-wireshark-path": {"__extra__": lambda w: get_cwd(w, "--set-wireshark-path")}
    }

    # Transitions from start to main
    for f in ["--wireshark", "-w", "--help", "-h"]:
        capture_start[f] = capture_main
    for f in ["--namespace", "--filter", "-f"]:
        capture_start[f] = {"*": capture_main}

    capture_start["*"] = capture_main

    return capture_start
|
||||
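The self-referential dictionaries above are consumed by connpy's completion machinery. As a standalone illustration, the `suggest` walker below is a hypothetical stand-in for that machinery (not the plugin's code): it follows typed words through the tree, then combines the node's flag keys with whatever its `__extra__` callable contributes.

```python
nodes = ["router1", "router2"]
get_nodes = lambda w: nodes if not [x for x in w[:-1] if not x.startswith("-") and x != "capture"] else []

# A reduced version of the capture tree: "main" points back at itself so
# flags can repeat, and "start" transitions into "main" after any word.
main = {"__exclude_used__": True, "__extra__": get_nodes}
main["*"] = main
for flag in ["--wireshark", "-w"]:
    main[flag] = main

start = {"__exclude_used__": True, "__extra__": get_nodes}
for flag in ["--wireshark", "-w"]:
    start[flag] = main
start["*"] = main

def suggest(tree, words):
    """Walk the typed words through the tree and return candidate completions."""
    node = tree
    for word in words:
        node = node.get(word) or node.get("*") or {}
    out = [k for k in node if not k.startswith("__") and k != "*"]
    extra = node.get("__extra__")
    if callable(extra):
        out += extra(words + [""])  # simulate an empty partial word
    return out

print(suggest(start, ["capture"]))             # flags plus node names
print(suggest(start, ["capture", "router1"]))  # positional consumed: flags only
```

Once a positional (a node name) has been typed, `get_nodes` returns an empty list, so only flags remain as suggestions.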
Executable
+378
@@ -0,0 +1,378 @@
#!/usr/bin/python3
import argparse
import os
import time
import zipfile
import tempfile
import io
import yaml
import threading
from google.oauth2.credentials import Credentials
from google.auth.transport.requests import Request
from googleapiclient.discovery import build
from google.auth.exceptions import RefreshError
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.http import MediaFileUpload, MediaIoBaseDownload
from googleapiclient.errors import HttpError
from datetime import datetime


class sync:

    def __init__(self, connapp):
        self.scopes = ['https://www.googleapis.com/auth/drive.appdata']
        self.token_file = f"{connapp.config.defaultdir}/gtoken.json"
        self.file = connapp.config.file
        self.key = connapp.config.key
        self.google_client = f"{os.path.dirname(os.path.abspath(__file__))}/sync_client"
        self.connapp = connapp
        try:
            self.sync = self.connapp.config.config["sync"]
        except KeyError:
            self.sync = False

    def login(self):
        creds = None
        # The token file stores the user's access and refresh tokens.
        if os.path.exists(self.token_file):
            creds = Credentials.from_authorized_user_file(self.token_file, self.scopes)

        try:
            # If there are no valid credentials available, let the user log in.
            if not creds or not creds.valid:
                if creds and creds.expired and creds.refresh_token:
                    creds.refresh(Request())
                else:
                    flow = InstalledAppFlow.from_client_secrets_file(
                        self.google_client, self.scopes)
                    creds = flow.run_local_server(port=0, access_type='offline')

                # Save the credentials for the next run
                with open(self.token_file, 'w') as token:
                    token.write(creds.to_json())

            print("Logged in successfully.")

        except RefreshError:
            # If the refresh fails, delete the invalid token file and start a new login flow
            if os.path.exists(self.token_file):
                os.remove(self.token_file)
                print("Existing token was invalid and has been removed. Please log in again.")
            flow = InstalledAppFlow.from_client_secrets_file(
                self.google_client, self.scopes)
            creds = flow.run_local_server(port=0, access_type='offline')
            with open(self.token_file, 'w') as token:
                token.write(creds.to_json())
            print("Logged in successfully after re-authentication.")

    def logout(self):
        if os.path.exists(self.token_file):
            os.remove(self.token_file)
            print("Logged out successfully.")
        else:
            print("No credentials file found. Already logged out.")

    def get_credentials(self):
        # Load credentials from the token file
        if os.path.exists(self.token_file):
            creds = Credentials.from_authorized_user_file(self.token_file, self.scopes)
        else:
            print("Credentials file not found.")
            return 0

        # If there are no valid credentials available, ask the user to log in again
        if not creds or not creds.valid:
            if creds and creds.expired and creds.refresh_token:
                try:
                    creds.refresh(Request())
                except RefreshError:
                    print("Could not refresh access token. Please log in again.")
                    return 0
            else:
                print("Credentials are missing or invalid. Please log in.")
                return 0
        return creds

    def check_login_status(self):
        # Check if the credentials file exists
        if os.path.exists(self.token_file):
            # Load credentials from the token file
            creds = Credentials.from_authorized_user_file(self.token_file)

            # If the credentials are expired, try to refresh them
            if creds and creds.expired and creds.refresh_token:
                try:
                    creds.refresh(Request())
                except RefreshError:
                    pass

            # Check if the credentials are valid after the refresh
            if creds.valid:
                return True
            else:
                return "Invalid"
        else:
            return False

    def status(self):
        print(f"Login: {self.check_login_status()}")
        print(f"Sync: {self.sync}")

    def get_appdata_files(self):
        creds = self.get_credentials()
        if not creds:
            return 0

        try:
            # Create the Google Drive service
            service = build("drive", "v3", credentials=creds)

            # List files in the appDataFolder
            response = (
                service.files()
                .list(
                    spaces="appDataFolder",
                    fields="files(id, name, appProperties)",
                    pageSize=10,
                )
                .execute()
            )

            files_info = []
            for file in response.get("files", []):
                # Extract file information
                file_id = file.get("id")
                file_name = file.get("name")
                timestamp = file.get("appProperties", {}).get("timestamp")
                human_readable_date = file.get("appProperties", {}).get("date")
                files_info.append({"name": file_name, "id": file_id, "date": human_readable_date, "timestamp": timestamp})

            return files_info

        except HttpError as error:
            print(f"An error occurred: {error}")
            return 0

    def dump_appdata_files_yaml(self):
        files_info = self.get_appdata_files()
        if not files_info:
            print("Failed to retrieve files or no files found.")
            return
        # Pretty-print as YAML
        yaml_output = yaml.dump(files_info, sort_keys=False, default_flow_style=False)
        print(yaml_output)

    def backup_file_to_drive(self, file_path, timestamp):
        creds = self.get_credentials()
        if not creds:
            return 1

        # Create the Google Drive service
        service = build('drive', 'v3', credentials=creds)

        # Convert the timestamp to a human-readable date
        human_readable_date = datetime.fromtimestamp(timestamp / 1000).strftime('%Y-%m-%d %H:%M:%S')

        # Upload the file to Google Drive with timestamp metadata
        file_metadata = {
            'name': os.path.basename(file_path),
            'parents': ["appDataFolder"],
            'appProperties': {
                'timestamp': str(timestamp),
                'date': human_readable_date  # Add a human-readable date attribute
            }
        }
        media = MediaFileUpload(file_path)

        try:
            service.files().create(body=file_metadata, media_body=media, fields='id').execute()
            return 0
        except Exception as e:
            return f"An error occurred: {e}"

    def delete_file_by_id(self, file_id):
        creds = self.get_credentials()
        if not creds:
            return 1

        try:
            # Create the Google Drive service
            service = build("drive", "v3", credentials=creds)

            # Delete the file
            service.files().delete(fileId=file_id).execute()
            return 0
        except Exception as e:
            return f"An error occurred: {e}"

    def compress_specific_files(self, zip_path):
        with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
            zipf.write(self.file, "config.json")
            zipf.write(self.key, ".osk")

    def compress_and_upload(self):
        # Timestamp used both in the backup name and in its metadata
        timestamp = int(time.time() * 1000)
        # Create a temporary directory for storing the zip file
        with tempfile.TemporaryDirectory() as tmp_dir:
            # Compress the config and key files to a zip file in the temporary directory
            zip_path = os.path.join(tmp_dir, f"connpy-backup-{timestamp}.zip")
            self.compress_specific_files(zip_path)

            # Get the files in the app data folder
            app_data_files = self.get_appdata_files()
            if app_data_files == 0:
                return 1

            # If there are 10 or more files, remove the oldest one based on timestamp
            if len(app_data_files) >= 10:
                oldest_file = min(app_data_files, key=lambda x: x['timestamp'])
                delete_old = self.delete_file_by_id(oldest_file['id'])
                if delete_old:
                    print(delete_old)
                    return 1

            # Upload the new file
            upload_new = self.backup_file_to_drive(zip_path, timestamp)
            if upload_new:
                print(upload_new)
                return 1

            print("Backup to Google uploaded successfully.")
            return 0
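`compress_and_upload` caps the backup history at ten entries by deleting the oldest backup (smallest timestamp) before uploading the new one. A standalone sketch of that rotation policy (integer timestamps assumed for the comparison; the real code stores them as strings in `appProperties`):

```python
def rotate(backups, new_ts, limit=10):
    """Apply the keep-at-most-`limit` policy before adding a new backup."""
    backups = list(backups)
    if len(backups) >= limit:
        # Drop the oldest backup to make room for the new one
        oldest = min(backups, key=lambda b: b["timestamp"])
        backups.remove(oldest)
    backups.append({"name": f"connpy-backup-{new_ts}.zip", "timestamp": new_ts})
    return backups

existing = [{"name": f"b{i}", "timestamp": i} for i in range(10)]
updated = rotate(existing, new_ts=99)
print(len(updated), min(b["timestamp"] for b in updated))  # the oldest entry was dropped
```

Note that comparing string timestamps lexicographically would eventually misorder values of different lengths, which is why the sketch uses integers.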
    def decompress_zip(self, zip_path):
        try:
            with zipfile.ZipFile(zip_path, 'r') as zipf:
                # Extract the config and key files to their destinations
                zipf.extract("config.json", os.path.dirname(self.file))
                zipf.extract(".osk", os.path.dirname(self.key))
            return 0
        except Exception as e:
            print(f"An error occurred: {e}")
            return 1

    def download_file_by_id(self, file_id, destination_path):
        creds = self.get_credentials()
        if not creds:
            return 1

        try:
            # Create the Google Drive service
            service = build('drive', 'v3', credentials=creds)

            # Download the file
            request = service.files().get_media(fileId=file_id)
            fh = io.FileIO(destination_path, mode='wb')
            downloader = MediaIoBaseDownload(fh, request)
            done = False
            while not done:
                status, done = downloader.next_chunk()

            return 0
        except Exception as e:
            return f"An error occurred: {e}"

    def restore_last_config(self, file_id=None):
        # Get the files in the app data folder
        app_data_files = self.get_appdata_files()
        if not app_data_files:
            print("No files found in app data folder.")
            return 1

        # Check if a specific file_id was provided and whether it exists in the list
        if file_id:
            selected_file = next((f for f in app_data_files if f['id'] == file_id), None)
            if not selected_file:
                print(f"No file found with ID: {file_id}")
                return 1
        else:
            # Find the latest file based on timestamp
            selected_file = max(app_data_files, key=lambda x: x['timestamp'])

        # Download the selected file to a temporary location
        temp_download_path = os.path.join(tempfile.gettempdir(), 'connpy-backup.zip')
        if self.download_file_by_id(selected_file['id'], temp_download_path):
            return 1

        # Unzip the downloaded file to the destination folder
        if self.decompress_zip(temp_download_path):
            print("Failed to decompress the file.")
            return 1

        print(f"Backup from Google Drive restored successfully: {selected_file['name']}")
        return 0

    def config_listener_post(self, args, kwargs):
        if self.sync:
            if self.check_login_status() == True:
                if not kwargs["result"]:
                    self.compress_and_upload()
            else:
                print("Sync cannot be performed. Please check your login status.")
        return kwargs["result"]

    def config_listener_pre(self, *args, **kwargs):
        try:
            self.sync = self.connapp.config.config["sync"]
        except KeyError:
            self.sync = False
        return args, kwargs

    def start_post_thread(self, *args, **kwargs):
        post_thread = threading.Thread(target=self.config_listener_post, args=(args, kwargs))
        post_thread.start()


class Preload:
    def __init__(self, connapp):
        syncapp = sync(connapp)
        connapp.config._saveconfig.register_post_hook(syncapp.start_post_thread)
        connapp.config._saveconfig.register_pre_hook(syncapp.config_listener_pre)
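`Preload` wires the sync hooks into `connapp.config._saveconfig`. A minimal sketch of the hook contract this assumes (the `Hookable` class below is hypothetical; only the `register_pre_hook`/`register_post_hook` names mirror the code above): pre-hooks may rewrite the call's arguments, post-hooks observe the wrapped function's result.

```python
class Hookable:
    """Hypothetical wrapper illustrating a pre/post hook API."""
    def __init__(self, fn):
        self.fn = fn
        self.pre_hooks = []
        self.post_hooks = []

    def register_pre_hook(self, hook):
        self.pre_hooks.append(hook)

    def register_post_hook(self, hook):
        self.post_hooks.append(hook)

    def __call__(self, *args, **kwargs):
        # Pre-hooks can rewrite the call before it happens
        for hook in self.pre_hooks:
            args, kwargs = hook(*args, **kwargs)
        result = self.fn(*args, **kwargs)
        # Post-hooks receive the arguments plus the result
        for hook in self.post_hooks:
            hook(args, {**kwargs, "result": result})
        return result


events = []
save = Hookable(lambda: 0)  # stand-in for the real _saveconfig
save.register_pre_hook(lambda *a, **k: (events.append("pre") or a, k))
save.register_post_hook(lambda a, k: events.append(("post", k["result"])))
save()
print(events)
```

This mirrors why `config_listener_pre` returns `(args, kwargs)` and why `config_listener_post` reads `kwargs["result"]`: a zero (falsy) result means the save succeeded, which triggers the upload.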
class Parser:
    def __init__(self):
        self.parser = argparse.ArgumentParser(description="Sync config with Google")
        self.description = "Sync config with Google"
        subparsers = self.parser.add_subparsers(title="Commands", dest='command', metavar="")
        login_parser = subparsers.add_parser("login", help="Login to Google to enable synchronization")
        logout_parser = subparsers.add_parser("logout", help="Logout from Google")
        start_parser = subparsers.add_parser("start", help="Start synchronizing with Google")
        stop_parser = subparsers.add_parser("stop", help="Stop any ongoing synchronization")
        restore_parser = subparsers.add_parser("restore", help="Restore data from Google")
        backup_parser = subparsers.add_parser("once", help="Backup current configuration to Google once")
        restore_parser.add_argument("--id", type=str, help="Optional file ID to restore a specific backup", required=False)
        status_parser = subparsers.add_parser("status", help="Check the current status of synchronization")
        list_parser = subparsers.add_parser("list", help="List all backups stored on Google")


class Entrypoint:
    def __init__(self, args, parser, connapp):
        syncapp = sync(connapp)
        if args.command == 'login':
            syncapp.login()
        elif args.command == "status":
            syncapp.status()
        elif args.command == "start":
            connapp._change_settings("sync", True)
        elif args.command == "stop":
            connapp._change_settings("sync", False)
        elif args.command == "list":
            syncapp.dump_appdata_files_yaml()
        elif args.command == "once":
            syncapp.compress_and_upload()
        elif args.command == "restore":
            syncapp.restore_last_config(args.id)
        elif args.command == "logout":
            syncapp.logout()


def _connpy_completion(wordsnumber, words, info=None):
    result = []
    if wordsnumber == 3:
        result = ["--help", "login", "status", "start", "stop", "list", "once", "restore", "logout"]
    if wordsnumber == 4 and words[1] == "restore":
        result = ["--help", "--id"]
    return result
@@ -1,8 +0,0 @@
import sys
import os

# gRPC generated files use absolute imports that assume their directory is in sys.path.
# We add this directory to sys.path to allow imports like 'import connpy_pb2' to succeed.
current_dir = os.path.dirname(os.path.abspath(__file__))
if current_dir not in sys.path:
    sys.path.insert(0, current_dir)
File diff suppressed because one or more lines are too long
File diff suppressed because it is too large
@@ -1,25 +0,0 @@
syntax = "proto3";
package connpy_remote;

message IdRequest {
  string id = 1;
}

message StringResponse {
  string value = 1;
}

message PluginInvokeRequest {
  string name = 1;
  string args_json = 2;
}

message OutputChunk {
  string text = 1;
  bool is_error = 2;
}

service RemotePluginService {
  rpc get_plugin_source(IdRequest) returns (StringResponse);
  rpc invoke_plugin(PluginInvokeRequest) returns (stream OutputChunk);
}
@@ -1,44 +0,0 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler.  DO NOT EDIT!
# NO CHECKED-IN PROTOBUF GENCODE
# source: remote_plugin.proto
# Protobuf Python Version: 6.31.1
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import runtime_version as _runtime_version
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
_runtime_version.ValidateProtobufRuntimeVersion(
    _runtime_version.Domain.PUBLIC,
    6,
    31,
    1,
    '',
    'remote_plugin.proto'
)
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x13remote_plugin.proto\x12\rconnpy_remote\"\x17\n\tIdRequest\x12\n\n\x02id\x18\x01 \x01(\t\"\x1f\n\x0eStringResponse\x12\r\n\x05value\x18\x01 \x01(\t\"6\n\x13PluginInvokeRequest\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x11\n\targs_json\x18\x02 \x01(\t\"-\n\x0bOutputChunk\x12\x0c\n\x04text\x18\x01 \x01(\t\x12\x10\n\x08is_error\x18\x02 \x01(\x08\x32\xb6\x01\n\x13RemotePluginService\x12L\n\x11get_plugin_source\x12\x18.connpy_remote.IdRequest\x1a\x1d.connpy_remote.StringResponse\x12Q\n\rinvoke_plugin\x12\".connpy_remote.PluginInvokeRequest\x1a\x1a.connpy_remote.OutputChunk0\x01\x62\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'remote_plugin_pb2', _globals)
if not _descriptor._USE_C_DESCRIPTORS:
  DESCRIPTOR._loaded_options = None
  _globals['_IDREQUEST']._serialized_start=38
  _globals['_IDREQUEST']._serialized_end=61
  _globals['_STRINGRESPONSE']._serialized_start=63
  _globals['_STRINGRESPONSE']._serialized_end=94
  _globals['_PLUGININVOKEREQUEST']._serialized_start=96
  _globals['_PLUGININVOKEREQUEST']._serialized_end=150
  _globals['_OUTPUTCHUNK']._serialized_start=152
  _globals['_OUTPUTCHUNK']._serialized_end=197
  _globals['_REMOTEPLUGINSERVICE']._serialized_start=200
  _globals['_REMOTEPLUGINSERVICE']._serialized_end=382
# @@protoc_insertion_point(module_scope)
@@ -1,140 +0,0 @@
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
import warnings

from . import remote_plugin_pb2 as remote__plugin__pb2

GRPC_GENERATED_VERSION = '1.80.0'
GRPC_VERSION = grpc.__version__
_version_not_supported = False

try:
    from grpc._utilities import first_version_is_lower
    _version_not_supported = first_version_is_lower(GRPC_VERSION, GRPC_GENERATED_VERSION)
except ImportError:
    _version_not_supported = True

if _version_not_supported:
    raise RuntimeError(
        f'The grpc package installed is at version {GRPC_VERSION},'
        + ' but the generated code in remote_plugin_pb2_grpc.py depends on'
        + f' grpcio>={GRPC_GENERATED_VERSION}.'
        + f' Please upgrade your grpc module to grpcio>={GRPC_GENERATED_VERSION}'
        + f' or downgrade your generated code using grpcio-tools<={GRPC_VERSION}.'
    )


class RemotePluginServiceStub(object):
    """Missing associated documentation comment in .proto file."""

    def __init__(self, channel):
        """Constructor.

        Args:
            channel: A grpc.Channel.
        """
        self.get_plugin_source = channel.unary_unary(
                '/connpy_remote.RemotePluginService/get_plugin_source',
                request_serializer=remote__plugin__pb2.IdRequest.SerializeToString,
                response_deserializer=remote__plugin__pb2.StringResponse.FromString,
                _registered_method=True)
        self.invoke_plugin = channel.unary_stream(
                '/connpy_remote.RemotePluginService/invoke_plugin',
                request_serializer=remote__plugin__pb2.PluginInvokeRequest.SerializeToString,
                response_deserializer=remote__plugin__pb2.OutputChunk.FromString,
                _registered_method=True)


class RemotePluginServiceServicer(object):
    """Missing associated documentation comment in .proto file."""

    def get_plugin_source(self, request, context):
        """Missing associated documentation comment in .proto file."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def invoke_plugin(self, request, context):
        """Missing associated documentation comment in .proto file."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')


def add_RemotePluginServiceServicer_to_server(servicer, server):
    rpc_method_handlers = {
            'get_plugin_source': grpc.unary_unary_rpc_method_handler(
                    servicer.get_plugin_source,
                    request_deserializer=remote__plugin__pb2.IdRequest.FromString,
                    response_serializer=remote__plugin__pb2.StringResponse.SerializeToString,
            ),
            'invoke_plugin': grpc.unary_stream_rpc_method_handler(
                    servicer.invoke_plugin,
                    request_deserializer=remote__plugin__pb2.PluginInvokeRequest.FromString,
                    response_serializer=remote__plugin__pb2.OutputChunk.SerializeToString,
            ),
    }
    generic_handler = grpc.method_handlers_generic_handler(
            'connpy_remote.RemotePluginService', rpc_method_handlers)
    server.add_generic_rpc_handlers((generic_handler,))
    server.add_registered_method_handlers('connpy_remote.RemotePluginService', rpc_method_handlers)


# This class is part of an EXPERIMENTAL API.
class RemotePluginService(object):
    """Missing associated documentation comment in .proto file."""

    @staticmethod
    def get_plugin_source(request,
            target,
            options=(),
            channel_credentials=None,
            call_credentials=None,
            insecure=False,
            compression=None,
            wait_for_ready=None,
            timeout=None,
            metadata=None):
        return grpc.experimental.unary_unary(
            request,
            target,
            '/connpy_remote.RemotePluginService/get_plugin_source',
            remote__plugin__pb2.IdRequest.SerializeToString,
            remote__plugin__pb2.StringResponse.FromString,
            options,
            channel_credentials,
            insecure,
            call_credentials,
            compression,
            wait_for_ready,
            timeout,
            metadata,
            _registered_method=True)

    @staticmethod
    def invoke_plugin(request,
            target,
            options=(),
            channel_credentials=None,
            call_credentials=None,
            insecure=False,
            compression=None,
            wait_for_ready=None,
            timeout=None,
            metadata=None):
        return grpc.experimental.unary_stream(
            request,
            target,
            '/connpy_remote.RemotePluginService/invoke_plugin',
            remote__plugin__pb2.PluginInvokeRequest.SerializeToString,
            remote__plugin__pb2.OutputChunk.FromString,
            options,
            channel_credentials,
            insecure,
            call_credentials,
            compression,
            wait_for_ready,
            timeout,
            metadata,
            _registered_method=True)
@@ -1,837 +0,0 @@
import grpc
from concurrent import futures
from google.protobuf.empty_pb2 import Empty
import os
import ctypes
import threading

# Suppress harmless but noisy gRPC fork() warnings from pexpect child processes
os.environ["GRPC_VERBOSITY"] = "NONE"
os.environ["GRPC_ENABLE_FORK_SUPPORT"] = "0"
from . import connpy_pb2, connpy_pb2_grpc, remote_plugin_pb2, remote_plugin_pb2_grpc
import json
from .utils import to_value, from_value, to_struct, from_struct
from ..services.exceptions import ConnpyError
from .. import printer

# Import local services
from ..services.node_service import NodeService
from ..services.profile_service import ProfileService
from ..services.config_service import ConfigService
from ..services.plugin_service import PluginService
from ..services.ai_service import AIService
from ..services.system_service import SystemService
from ..services.execution_service import ExecutionService
from ..services.import_export_service import ImportExportService


def handle_errors(func):
    import inspect
    if inspect.isgeneratorfunction(func):
        def wrapper(*args, **kwargs):
            try:
                for item in func(*args, **kwargs):
                    yield item
            except ConnpyError as e:
                context = kwargs.get("context") or args[-1]
                context.abort(grpc.StatusCode.INTERNAL, str(e))
            except Exception as e:
                context = kwargs.get("context") or args[-1]
                context.abort(grpc.StatusCode.UNKNOWN, str(e))
            finally:
                printer.clear_thread_state()
        return wrapper
    else:
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except ConnpyError as e:
                context = kwargs.get("context") or args[-1]
                context.abort(grpc.StatusCode.INTERNAL, str(e))
            except Exception as e:
                context = kwargs.get("context") or args[-1]
                context.abort(grpc.StatusCode.UNKNOWN, str(e))
            finally:
                printer.clear_thread_state()
        return wrapper
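The `inspect.isgeneratorfunction` branch in `handle_errors` matters because a plain try/except around a generator call only guards generator *creation*; exceptions raised while the stream is consumed escape the wrapper. A standalone sketch (no gRPC) of the two behaviors:

```python
import inspect

def naive(func):
    # The try only covers building the generator object, so errors
    # raised during iteration are not caught here.
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except ValueError:
            return iter(["caught"])
    return wrapper

def generator_aware(func):
    # Like handle_errors: for generator functions, wrap the iteration
    # itself so errors raised mid-stream are intercepted.
    if inspect.isgeneratorfunction(func):
        def wrapper(*args, **kwargs):
            try:
                yield from func(*args, **kwargs)
            except ValueError:
                yield "caught"
        return wrapper
    return func

def stream():
    yield "chunk"
    raise ValueError("boom")

safe = generator_aware(stream)
print(list(safe()))  # the mid-stream error is intercepted

try:
    list(naive(stream)())
except ValueError:
    print("escaped the naive wrapper")
```

This is why the servicer's streaming RPCs (like `interact_node`) need the generator form of the wrapper: the RPC body runs lazily, chunk by chunk, as gRPC drains the response iterator.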
class NodeServicer(connpy_pb2_grpc.NodeServiceServicer):
|
||||
def __init__(self, config):
|
||||
self.service = NodeService(config)
|
||||
|
||||
@handle_errors
|
||||
def interact_node(self, request_iterator, context):
|
||||
import sys
|
||||
import os
|
||||
import asyncio
|
||||
from connpy.core import node
|
||||
from ..services.profile_service import ProfileService
|
||||
from connpy.tunnels import RemoteStream
|
||||
import queue
|
||||
import threading
|
||||
|
||||
# Fetch first setup packet
|
||||
try:
|
||||
first_req = next(request_iterator)
|
||||
except StopIteration:
|
||||
context.abort(grpc.StatusCode.INVALID_ARGUMENT, "No setup request received")
|
||||
|
||||
unique_id = first_req.id
|
||||
sftp = first_req.sftp
|
||||
debug = first_req.debug
|
||||
|
||||
if debug:
|
||||
printer.console.print(f"[debug][DEBUG][/debug] gRPC interact_node request for: [bold cyan]{unique_id}[/bold cyan]")
|
||||
|
||||
if first_req.connection_params_json:
|
||||
import json
|
||||
params = json.loads(first_req.connection_params_json)
|
||||
base_node_id = params.get("base_node")
|
||||
# Valid attributes that a node object accepts
|
||||
valid_attrs = ['host', 'options', 'logs', 'password', 'port', 'protocol', 'user', 'jumphost']
|
||||
|
||||
fallback_id = f"{unique_id}@remote"
|
||||
if unique_id == "dynamic" and params.get("host"):
|
||||
fallback_id = f"dynamic-{params.get('host')}@remote"
|
||||
|
||||
if base_node_id:
|
||||
# Look up the base node in config and use its full data
|
||||
nodes = self.service.config._getallnodes(base_node_id)
|
||||
if nodes:
|
||||
device = self.service.config.getitem(nodes[0])
|
||||
# Override device properties with any passed in params
|
||||
for attr in valid_attrs:
|
||||
if attr in params:
|
||||
device[attr] = params[attr]
|
||||
|
||||
if "tags" in params:
|
||||
device_tags = device.get("tags", {})
|
||||
if not isinstance(device_tags, dict):
|
||||
device_tags = {}
|
||||
device_tags.update(params["tags"])
|
||||
device["tags"] = device_tags
|
||||
|
||||
node_name = params.get("name", base_node_id)
|
||||
n = node(node_name, **device, config=self.service.config)
|
||||
else:
|
||||
# base_node not found, fall back to dynamic
|
||||
node_name = params.get("name", fallback_id)
|
||||
n = node(node_name, host=params.get("host", ""), config=self.service.config)
|
||||
for attr in valid_attrs:
|
||||
if attr in params:
|
||||
setattr(n, attr, params[attr])
|
||||
if "tags" in params:
|
||||
n.tags = params["tags"]
|
||||
else:
|
||||
node_name = params.get("name", fallback_id)
|
||||
n = node(node_name, host=params.get("host", ""), config=self.service.config)
|
||||
for attr in valid_attrs:
|
||||
if attr in params:
|
||||
setattr(n, attr, params[attr])
|
||||
if "tags" in params:
|
||||
n.tags = params["tags"]
|
||||
else:
|
||||
node_data = self.service.config.getitem(unique_id, extract=False)
|
||||
if not node_data:
|
||||
context.abort(grpc.StatusCode.NOT_FOUND, f"Node {unique_id} not found")
|
||||
profile_service = ProfileService(self.service.config)
|
||||
resolved_data = profile_service.resolve_node_data(node_data)
|
||||
n = node(unique_id, **resolved_data, config=self.service.config)
|
||||
if sftp:
|
||||
n.protocol = "sftp"
|
||||
|
||||
# Build a logger that captures debug messages as ANSI-colored bytes for the client
|
||||
debug_chunks = []
|
        if debug:
            from io import StringIO
            from rich.console import Console as RichConsole
            from ..printer import connpy_theme
            from .. import printer as _printer

            def remote_logger(msg_type, message):
                buf = StringIO()
                c = RichConsole(file=buf, force_terminal=True, width=120, theme=connpy_theme)
                if msg_type == "debug":
                    c.print(_printer._format_multiline("i", f"[DEBUG] {message}", style="info"))
                elif msg_type == "success":
                    c.print(_printer._format_multiline("✓", message, style="success"))
                elif msg_type == "error":
                    c.print(_printer._format_multiline("✗", message, style="error"))
                else:
                    c.print(str(message))
                rendered = buf.getvalue()
                if rendered:
                    # Raw TTY needs \r\n instead of \n
                    rendered = rendered.replace('\n', '\r\n')
                    debug_chunks.append(rendered.encode())
        else:
            remote_logger = None

        connect = n._connect(debug=debug, logger=remote_logger)

        # Send debug output to the client before checking the result (always show the command)
        for chunk in debug_chunks:
            yield connpy_pb2.InteractResponse(stdout_data=chunk)

        if connect is not True:
            yield connpy_pb2.InteractResponse(success=False, error_message=str(connect))
            return

        # Signal successful connection to the client
        yield connpy_pb2.InteractResponse(success=True)

        # Set initial window size if provided
        if first_req.cols > 0 and first_req.rows > 0:
            try:
                n.child.setwinsize(first_req.rows, first_req.cols)
            except Exception:
                pass

        response_queue = queue.Queue()
        remote_stream = RemoteStream(request_iterator, response_queue)

        def run_async_loop():
            try:
                n._setup_interact_environment(debug=debug, logger=None, async_mode=True)

                def resize_callback(rows, cols):
                    try:
                        n.child.setwinsize(rows, cols)
                    except Exception:
                        pass

                asyncio.run(n._async_interact_loop(remote_stream, resize_callback))
            except Exception:
                pass
            finally:
                n._teardown_interact_environment()
                response_queue.put(None)  # Signal EOF

        t_loop = threading.Thread(target=run_async_loop, daemon=True)
        t_loop.start()

        while True:
            data = response_queue.get()
            if data is None:
                if debug:
                    printer.console.print(f"[debug][DEBUG][/debug] gRPC interact_node session closed for: [bold cyan]{unique_id}[/bold cyan]")
                break
            yield connpy_pb2.InteractResponse(stdout_data=data)

    @handle_errors
    def list_nodes(self, request, context):
        f = request.filter_str if request.filter_str else None
        fmt = request.format_str if request.format_str else None
        return connpy_pb2.ValueResponse(data=to_value(self.service.list_nodes(f, fmt)))

    @handle_errors
    def list_folders(self, request, context):
        f = request.filter_str if request.filter_str else None
        return connpy_pb2.ValueResponse(data=to_value(self.service.list_folders(f)))

    @handle_errors
    def get_node_details(self, request, context):
        return connpy_pb2.StructResponse(data=to_struct(self.service.get_node_details(request.id)))

    @handle_errors
    def explode_unique(self, request, context):
        return connpy_pb2.ValueResponse(data=to_value(self.service.explode_unique(request.id)))

    @handle_errors
    def validate_parent_folder(self, request, context):
        self.service.validate_parent_folder(request.id)
        return Empty()

    @handle_errors
    def generate_cache(self, request, context):
        self.service.generate_cache()
        return Empty()

    @handle_errors
    def add_node(self, request, context):
        self.service.add_node(request.id, from_struct(request.data), request.is_folder)
        self.service.generate_cache()
        return Empty()

    @handle_errors
    def update_node(self, request, context):
        self.service.update_node(request.id, from_struct(request.data))
        self.service.generate_cache()
        return Empty()

    @handle_errors
    def delete_node(self, request, context):
        self.service.delete_node(request.id, request.is_folder)
        self.service.generate_cache()
        return Empty()

    @handle_errors
    def move_node(self, request, context):
        self.service.move_node(request.src_id, request.dst_id, request.copy)
        self.service.generate_cache()
        return Empty()

    @handle_errors
    def bulk_add(self, request, context):
        self.service.bulk_add(list(request.ids), list(request.hosts), from_struct(request.common_data))
        self.service.generate_cache()
        return Empty()

    @handle_errors
    def set_reserved_names(self, request, context):
        self.service.set_reserved_names(list(request.items))
        self.service.generate_cache()
        return Empty()

    @handle_errors
    def full_replace(self, request, context):
        connections = from_struct(request.connections)
        profiles = from_struct(request.profiles)
        self.service.full_replace(connections, profiles)
        self.service.generate_cache()
        return Empty()

    @handle_errors
    def get_inventory(self, request, context):
        data = self.service.get_inventory()
        return connpy_pb2.FullReplaceRequest(
            connections=to_struct(data["connections"]),
            profiles=to_struct(data["profiles"])
        )


class ProfileServicer(connpy_pb2_grpc.ProfileServiceServicer):
    def __init__(self, config):
        self.service = ProfileService(config)
        self.node_service = NodeService(config)

    @handle_errors
    def list_profiles(self, request, context):
        f = request.filter_str if request.filter_str else None
        return connpy_pb2.ValueResponse(data=to_value(self.service.list_profiles(f)))

    @handle_errors
    def get_profile(self, request, context):
        return connpy_pb2.StructResponse(data=to_struct(self.service.get_profile(request.name, request.resolve)))

    @handle_errors
    def add_profile(self, request, context):
        self.service.add_profile(request.id, from_struct(request.data))
        self.node_service.generate_cache()
        return Empty()

    @handle_errors
    def resolve_node_data(self, request, context):
        return connpy_pb2.StructResponse(data=to_struct(self.service.resolve_node_data(from_struct(request.data))))

    @handle_errors
    def delete_profile(self, request, context):
        self.service.delete_profile(request.id)
        self.node_service.generate_cache()
        return Empty()

    @handle_errors
    def update_profile(self, request, context):
        self.service.update_profile(request.id, from_struct(request.data))
        self.node_service.generate_cache()
        return Empty()


class ConfigServicer(connpy_pb2_grpc.ConfigServiceServicer):
    def __init__(self, config):
        self.service = ConfigService(config)

    @handle_errors
    def get_settings(self, request, context):
        return connpy_pb2.StructResponse(data=to_struct(self.service.get_settings()))

    @handle_errors
    def get_default_dir(self, request, context):
        return connpy_pb2.StringResponse(value=self.service.get_default_dir())

    @handle_errors
    def set_config_folder(self, request, context):
        self.service.set_config_folder(request.value)
        return Empty()

    @handle_errors
    def update_setting(self, request, context):
        self.service.update_setting(request.key, from_value(request.value))
        return Empty()

    @handle_errors
    def encrypt_password(self, request, context):
        return connpy_pb2.StringResponse(value=self.service.encrypt_password(request.value))

    @handle_errors
    def apply_theme_from_file(self, request, context):
        return connpy_pb2.StructResponse(data=to_struct(self.service.apply_theme_from_file(request.value)))


class PluginServicer(connpy_pb2_grpc.PluginServiceServicer, remote_plugin_pb2_grpc.RemotePluginServiceServicer):
    def __init__(self, config):
        self.service = PluginService(config)

    @handle_errors
    def list_plugins(self, request, context):
        return connpy_pb2.ValueResponse(data=to_value(self.service.list_plugins()))

    @handle_errors
    def add_plugin(self, request, context):
        if request.source_file.startswith("---CONTENT---\n"):
            content = request.source_file[len("---CONTENT---\n"):].encode()
            self.service.add_plugin_from_bytes(request.name, content, request.update)
        else:
            self.service.add_plugin(request.name, request.source_file, request.update)
        return Empty()

    @handle_errors
    def delete_plugin(self, request, context):
        self.service.delete_plugin(request.id)
        return Empty()

    @handle_errors
    def enable_plugin(self, request, context):
        self.service.enable_plugin(request.id)
        return Empty()

    @handle_errors
    def disable_plugin(self, request, context):
        self.service.disable_plugin(request.id)
        return Empty()

    @handle_errors
    def get_plugin_source(self, request, context):
        source = self.service.get_plugin_source(request.id)
        return remote_plugin_pb2.StringResponse(value=source)

    @handle_errors
    def invoke_plugin(self, request, context):
        args_dict = json.loads(request.args_json)
        for chunk in self.service.invoke_plugin(request.name, args_dict):
            yield remote_plugin_pb2.OutputChunk(text=chunk)


class ExecutionServicer(connpy_pb2_grpc.ExecutionServiceServicer):
    def __init__(self, config):
        self.service = ExecutionService(config)

    @handle_errors
    def run_commands(self, request, context):
        import queue
        import threading

        nodes_filter = request.nodes[0] if len(request.nodes) == 1 else list(request.nodes)

        q = queue.Queue()

        def _on_complete(unique, output, status):
            q.put({"unique_id": unique, "output": output, "status": status})

        def _worker():
            try:
                self.service.run_commands(
                    nodes_filter=nodes_filter,
                    commands=list(request.commands),
                    folder=request.folder if request.folder else None,
                    prompt=request.prompt if request.prompt else None,
                    parallel=request.parallel,
                    timeout=request.timeout if request.timeout > 0 else 10,
                    variables=from_struct(request.vars) if request.HasField("vars") else None,
                    on_node_complete=_on_complete,
                    name=request.name if request.name else None
                )
            except Exception as e:
                # The handle_errors decorator covers top-level failures, but
                # exceptions raised in this worker thread won't reach
                # context.abort directly, so pass them through the queue.
                q.put(e)
            finally:
                q.put(None)

        threading.Thread(target=_worker, daemon=True).start()

        while True:
            item = q.get()
            if item is None:
                break
            if isinstance(item, Exception):
                raise item

            yield connpy_pb2.NodeRunResult(
                unique_id=item["unique_id"],
                output=item["output"],
                status=item["status"]
            )

    @handle_errors
    def test_commands(self, request, context):
        import queue
        import threading

        nodes_filter = request.nodes[0] if len(request.nodes) == 1 else list(request.nodes)

        q = queue.Queue()

        def _on_complete(unique, node_output, node_status, node_result):
            q.put({"unique_id": unique, "output": node_output, "status": node_status, "result": node_result})

        def _worker():
            try:
                self.service.test_commands(
                    nodes_filter=nodes_filter,
                    commands=list(request.commands),
                    expected=list(request.expected),
                    folder=request.folder if request.folder else None,
                    prompt=request.prompt if request.prompt else None,
                    parallel=request.parallel,
                    timeout=request.timeout if request.timeout > 0 else 10,
                    variables=from_struct(request.vars) if request.HasField("vars") else None,
                    on_node_complete=_on_complete,
                    name=request.name if request.name else None
                )
            except Exception as e:
                q.put(e)
            finally:
                q.put(None)

        threading.Thread(target=_worker, daemon=True).start()

        while True:
            item = q.get()
            if item is None:
                break
            if isinstance(item, Exception):
                raise item

            res = connpy_pb2.NodeRunResult(
                unique_id=item["unique_id"],
                output=item["output"],
                status=item["status"]
            )
            if item["result"] is not None:
                res.test_result.CopyFrom(to_struct(item["result"]))
            yield res

    @handle_errors
    def run_cli_script(self, request, context):
        res = self.service.run_cli_script(request.param1, request.param2, request.parallel)
        return connpy_pb2.StructResponse(data=to_struct(res))

    @handle_errors
    def run_yaml_playbook(self, request, context):
        res = self.service.run_yaml_playbook(request.param1, request.parallel)
        return connpy_pb2.StructResponse(data=to_struct(res))


class ImportExportServicer(connpy_pb2_grpc.ImportExportServiceServicer):
    def __init__(self, config):
        self.service = ImportExportService(config)
        self.node_service = NodeService(config)

    @handle_errors
    def export_to_file(self, request, context):
        self.service.export_to_file(request.file_path, list(request.folders) if request.folders else None)
        return Empty()

    @handle_errors
    def import_from_file(self, request, context):
        if request.value.startswith("---YAML---\n"):
            import yaml
            content = request.value[len("---YAML---\n"):]
            data = yaml.load(content, Loader=yaml.FullLoader)
            self.service.import_from_dict(data)
        else:
            self.service.import_from_file(request.value)
        self.node_service.generate_cache()
        return Empty()

    @handle_errors
    def set_reserved_names(self, request, context):
        self.service.set_reserved_names(list(request.items))
        self.node_service.generate_cache()
        return Empty()


class StatusBridge:
    def __init__(self, q, request_queue=None, is_web=False):
        self.q = q
        self.request_queue = request_queue
        self.on_interrupt = self._force_interrupt
        self.thread = None
        self.is_web = is_web

    def _force_interrupt(self):
        """Forcefully raise KeyboardInterrupt in the target thread."""
        if self.thread and self.thread.ident:
            # Standard CPython trick to raise an exception in a specific thread
            import ctypes
            ctypes.pythonapi.PyThreadState_SetAsyncExc(
                ctypes.c_long(self.thread.ident),
                ctypes.py_object(KeyboardInterrupt)
            )

    def update(self, msg):
        self.q.put(("status", msg))

    def stop(self):
        pass

    def print(self, *args, **kwargs):
        # Capture Rich output and send it as a debug message
        self._print_to_queue("debug", *args, **kwargs)

    def print_important(self, *args, **kwargs):
        # Capture Rich output and send it as an important message (always shown)
        self._print_to_queue("important", *args, **kwargs)

    def _print_to_queue(self, msg_type, *args, **kwargs):
        from rich.console import Console
        from rich.panel import Panel
        from io import StringIO
        from ..printer import connpy_theme

        processed_args = list(args)
        if self.is_web:
            # Unwrap Panels to avoid box-drawing characters on web, but preserve the title
            processed_args = []
            for arg in args:
                if isinstance(arg, Panel):
                    # If it has a title, prepend it to the content to allow detection
                    content = arg.renderable
                    if arg.title:
                        processed_args.append(f"{arg.title}\n")
                    processed_args.append(content)
                else:
                    processed_args.append(arg)

        buf = StringIO()
        # force_terminal=False strips ANSI escape codes for the web
        c = Console(file=buf, force_terminal=not self.is_web, width=100, theme=connpy_theme)
        c.print(*processed_args, **kwargs)

        text_content = buf.getvalue().strip()
        if text_content:
            self.q.put((msg_type, text_content))

    def confirm(self, prompt, default="n"):
        """Bridge a confirmation prompt to the gRPC client."""
        if not self.request_queue:
            return default

        # Render markup to ANSI for the client
        from rich.console import Console
        from io import StringIO
        from ..printer import connpy_theme
        buf = StringIO()
        c = Console(file=buf, force_terminal=True, theme=connpy_theme)
        c.print(prompt, end="")
        ansi_prompt = buf.getvalue()

        # Send the confirmation request to the client
        self.q.put(("confirm", ansi_prompt))

        # Wait for the client to send back the answer via the request stream
        try:
            # Block until we get the next request from the client
            req = self.request_queue.get()
            if req and req.confirmation_answer:
                return req.confirmation_answer
        except Exception:
            pass
        return default


class AIServicer(connpy_pb2_grpc.AIServiceServicer):
    def __init__(self, config):
        self.service = AIService(config)

    @handle_errors
    def ask(self, request_iterator, context):
        import queue
        import threading

        chunk_queue = queue.Queue()
        request_queue = queue.Queue()
        bridge = None
        history = []
        is_web = False

        # Handles for the AI worker thread and agent
        ai_thread = None
        agent_instance = None

        def callback(chunk):
            chunk_queue.put(("text", chunk))

        def run_ai_task(input_text, session_id, debug, overrides, trust):
            nonlocal history, bridge, agent_instance
            try:
                # Run the AI interaction (this blocks this specific thread)
                res = self.service.ask(
                    input_text,
                    chat_history=history if history else None,
                    session_id=session_id,
                    debug=debug,
                    status=bridge,
                    console=bridge,
                    confirm_handler=bridge.confirm,
                    chunk_callback=callback,
                    trust=trust,
                    **overrides
                )

                # Update history for the next message
                if "chat_history" in res:
                    history = res["chat_history"]

                # Send final chunk marker
                chunk_queue.put(("final_mark", res))
            except Exception as e:
                import traceback
                print(f"AI Task Error: {e}")
                traceback.print_exc()
                chunk_queue.put(("status", f"Error: {str(e)}"))

        def request_listener():
            nonlocal bridge, is_web, ai_thread, agent_instance
            try:
                for req in request_iterator:
                    if req.interrupt:
                        if bridge and bridge.on_interrupt:
                            bridge.on_interrupt()
                        continue

                    if req.confirmation_answer:
                        request_queue.put(req)
                        continue

                    if req.input_text:
                        is_web = "web" in (req.session_id or "").lower() or (req.session_id or "").lower().startswith("ws-")
                        if not bridge:
                            bridge = StatusBridge(chunk_queue, request_queue=request_queue, is_web=is_web)

                        overrides = {}
                        if req.engineer_model:
                            overrides["engineer_model"] = req.engineer_model
                        if req.engineer_api_key:
                            overrides["engineer_api_key"] = req.engineer_api_key

                        # Start AI in its own thread so we can keep listening for interrupts
                        ai_thread = threading.Thread(
                            target=run_ai_task,
                            args=(req.input_text, req.session_id, req.debug, overrides, req.trust),
                            daemon=True
                        )
                        ai_thread.start()
            except grpc.RpcError:
                pass
            except Exception as e:
                print(f"Request Listener Error: {e}")
            finally:
                # When the client closes the stream, send a sentinel
                chunk_queue.put((None, None))

        # Start listening for client requests/signals
        threading.Thread(target=request_listener, daemon=True).start()

        # Main response loop (yields to gRPC)
        while True:
            item = chunk_queue.get()
            if item == (None, None):
                break

            msg_type, val = item
            if msg_type == "text":
                yield connpy_pb2.AIResponse(text_chunk=val, is_final=False)
            elif msg_type == "status":
                if is_web and "is thinking" in val.lower():
                    continue
                clean_val = val.replace("[ai_status]", "").replace("[/ai_status]", "")
                yield connpy_pb2.AIResponse(status_update=clean_val, is_final=False)
            elif msg_type == "debug":
                yield connpy_pb2.AIResponse(debug_message=val, is_final=False)
            elif msg_type == "important":
                yield connpy_pb2.AIResponse(important_message=val, is_final=False)
            elif msg_type == "confirm":
                yield connpy_pb2.AIResponse(status_update=val, requires_confirmation=True, is_final=False)
            elif msg_type == "final_mark":
                yield connpy_pb2.AIResponse(is_final=True, full_result=to_struct(val))

    @handle_errors
    def confirm(self, request, context):
        res = self.service.confirm(request.value)
        return connpy_pb2.BoolResponse(value=res)

    @handle_errors
    def list_sessions(self, request, context):
        return connpy_pb2.ValueResponse(data=to_value(self.service.list_sessions()))

    @handle_errors
    def delete_session(self, request, context):
        self.service.delete_session(request.value)
        return Empty()

    @handle_errors
    def configure_provider(self, request, context):
        self.service.configure_provider(request.provider, request.model, request.api_key)
        return Empty()

    @handle_errors
    def load_session_data(self, request, context):
        return connpy_pb2.StructResponse(data=to_struct(self.service.load_session_data(request.value)))


class SystemServicer(connpy_pb2_grpc.SystemServiceServicer):
    def __init__(self, config):
        self.service = SystemService(config)

    @handle_errors
    def start_api(self, request, context):
        self.service.start_api(request.value)
        return Empty()

    @handle_errors
    def debug_api(self, request, context):
        self.service.debug_api(request.value)
        return Empty()

    @handle_errors
    def stop_api(self, request, context):
        self.service.stop_api()
        return Empty()

    @handle_errors
    def restart_api(self, request, context):
        self.service.restart_api(request.value)
        return Empty()

    @handle_errors
    def get_api_status(self, request, context):
        return connpy_pb2.BoolResponse(value=self.service.get_api_status())


class LoggingInterceptor(grpc.ServerInterceptor):
    def __init__(self):
        from rich.console import Console
        from ..printer import connpy_theme, get_original_stdout
        self.console = Console(theme=connpy_theme, file=get_original_stdout())

    def intercept_service(self, continuation, handler_call_details):
        import time
        method = handler_call_details.method
        self.console.print(f"[debug][DEBUG][/debug] gRPC Incoming Request: [bold cyan]{method}[/bold cyan]")

        start_time = time.time()
        try:
            result = continuation(handler_call_details)
        except Exception as e:
            self.console.print(f"[debug][DEBUG][/debug] [bold red]ERROR[/bold red] in {method}: {e}")
            raise
        finally:
            duration = (time.time() - start_time) * 1000
            self.console.print(f"[debug][DEBUG][/debug] Completed [bold cyan]{method}[/bold cyan] in {duration:.2f}ms")

        return result


def serve(config, port=8048, debug=False):
    interceptors = [LoggingInterceptor()] if debug else []
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10), interceptors=interceptors)

    connpy_pb2_grpc.add_NodeServiceServicer_to_server(NodeServicer(config), server)
    connpy_pb2_grpc.add_ProfileServiceServicer_to_server(ProfileServicer(config), server)
    connpy_pb2_grpc.add_ConfigServiceServicer_to_server(ConfigServicer(config), server)
    plugin_servicer = PluginServicer(config)
    connpy_pb2_grpc.add_PluginServiceServicer_to_server(plugin_servicer, server)
    remote_plugin_pb2_grpc.add_RemotePluginServiceServicer_to_server(plugin_servicer, server)
    connpy_pb2_grpc.add_ExecutionServiceServicer_to_server(ExecutionServicer(config), server)
    connpy_pb2_grpc.add_ImportExportServiceServicer_to_server(ImportExportServicer(config), server)
    connpy_pb2_grpc.add_AIServiceServicer_to_server(AIServicer(config), server)
    connpy_pb2_grpc.add_SystemServiceServicer_to_server(SystemServicer(config), server)

    server.add_insecure_port(f'[::]:{port}')
    server.start()
    return server

@@ -1,769 +0,0 @@
import grpc
import queue
import threading
from functools import wraps
from google.protobuf.empty_pb2 import Empty

from . import connpy_pb2, connpy_pb2_grpc, remote_plugin_pb2, remote_plugin_pb2_grpc
from .utils import to_value, from_value, to_struct, from_struct
from ..services.exceptions import ConnpyError
from ..hooks import MethodHook
from .. import printer


def handle_errors(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except grpc.RpcError as e:
            # Re-raise gRPC errors as native ConnpyError to keep CLI handlers agnostic
            details = e.details()

            # Identify the host if available on the instance
            instance = args[0] if args else None
            host = getattr(instance, "remote_host", "remote host")

            # Make common gRPC errors more readable
            if "failed to connect to all addresses" in details:
                simplified = f"Failed to connect to remote host at {host} (Connection refused)"
            elif "Method not found" in details:
                simplified = f"Remote server at {host} is using an incompatible version"
            elif "Deadline Exceeded" in details:
                simplified = f"Request to {host} timed out"
            else:
                simplified = details

            raise ConnpyError(simplified)
    return wrapper


class NodeStub:
    def __init__(self, channel, remote_host, config=None):
        self.stub = connpy_pb2_grpc.NodeServiceStub(channel)
        self.remote_host = remote_host
        self.config = config

    @handle_errors
    def connect_node(self, unique_id, sftp=False, debug=False, logger=None):
        import sys
        import select
        import tty
        import termios
        import os

        def request_generator():
            cols, rows = 80, 24
            try:
                size = os.get_terminal_size()
                cols, rows = size.columns, size.lines
            except OSError:
                pass

            yield connpy_pb2.InteractRequest(
                id=unique_id, sftp=sftp, debug=debug, cols=cols, rows=rows
            )

            while True:
                r, _, _ = select.select([sys.stdin.fileno()], [], [])
                if r:
                    try:
                        data = os.read(sys.stdin.fileno(), 1024)
                        if not data:
                            break
                        yield connpy_pb2.InteractRequest(stdin_data=data)
                    except OSError:
                        break

        # Fetch node details for the connection message
        try:
            node_details = self.get_node_details(unique_id)
            host = node_details.get("host", "unknown")
            port = str(node_details.get("port", ""))
            protocol = "sftp" if sftp else node_details.get("protocol", "ssh")
            port_str = f":{port}" if port and protocol not in ["ssm", "kubectl", "docker"] else ""
            conn_msg = f"Connected to {unique_id} at {host}{port_str} via: {protocol}"
        except Exception:
            conn_msg = f"Connected to {unique_id}"

        old_tty = termios.tcgetattr(sys.stdin)
        try:
            tty.setraw(sys.stdin.fileno())
            response_iterator = self.stub.interact_node(request_generator())

            # First phase: Wait for connection status, print early data
            try:
                for res in response_iterator:
                    if res.stdout_data:
                        data = res.stdout_data
                        if debug:
                            data = data.replace(b'\x1b[H\x1b[2J', b'').replace(b'\x1bc', b'').replace(b'\x1b[3J', b'')
                        os.write(sys.stdout.fileno(), data)

                    if res.success:
                        # Connection established on server, show success message
                        termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_tty)
                        printer.success(conn_msg)
                        tty.setraw(sys.stdin.fileno())
                        break

                    if res.error_message:
                        # Connection failed on server
                        termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_tty)
                        printer.error(f"Connection failed: {res.error_message}")
                        return
            except StopIteration:
                return

            # Second phase: Stream active session.
            # Clear screen filter is only applied before success (Phase 1).
            # Once the user has a prompt, Ctrl+L must work normally.
            for res in response_iterator:
                if res.stdout_data:
                    os.write(sys.stdout.fileno(), res.stdout_data)
        finally:
            termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_tty)

    @handle_errors
    def connect_dynamic(self, connection_params, debug=False):
        import sys
        import select
        import tty
        import termios
        import os
        import json

        params_json = json.dumps(connection_params)

        def request_generator():
            cols, rows = 80, 24
            try:
                size = os.get_terminal_size()
                cols, rows = size.columns, size.lines
            except OSError:
                pass

            yield connpy_pb2.InteractRequest(
                id="dynamic", debug=debug, cols=cols, rows=rows,
                connection_params_json=params_json
            )

            while True:
                r, _, _ = select.select([sys.stdin.fileno()], [], [])
                if r:
                    try:
                        data = os.read(sys.stdin.fileno(), 1024)
                        if not data:
                            break
                        yield connpy_pb2.InteractRequest(stdin_data=data)
                    except OSError:
                        break

        # Prepare connection message
        try:
            node_name = connection_params.get("name", "dynamic@remote")
            host = connection_params.get("host", "dynamic")
            port = str(connection_params.get("port", ""))
            protocol = connection_params.get("protocol", "ssh")
            port_str = f":{port}" if port and protocol not in ["ssm", "kubectl", "docker"] else ""
            conn_msg = f"Connected to {node_name} at {host}{port_str} via: {protocol}"
        except Exception:
            node_name = connection_params.get("name", "dynamic@remote") if isinstance(connection_params, dict) else "dynamic@remote"
            conn_msg = f"Connected to {node_name}"

        old_tty = termios.tcgetattr(sys.stdin)
        try:
            tty.setraw(sys.stdin.fileno())
            response_iterator = self.stub.interact_node(request_generator())

            # First phase: Wait for connection status, print early data
            try:
                for res in response_iterator:
                    if res.stdout_data:
                        data = res.stdout_data
                        if debug:
                            data = data.replace(b'\x1b[H\x1b[2J', b'').replace(b'\x1bc', b'').replace(b'\x1b[3J', b'')
                        os.write(sys.stdout.fileno(), data)

                    if res.success:
                        # Connection established on server, show success message
                        termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_tty)
                        printer.success(conn_msg)
                        tty.setraw(sys.stdin.fileno())
                        break

                    if res.error_message:
                        # Connection failed on server
                        termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_tty)
                        printer.error(f"Connection failed: {res.error_message}")
                        return
            except StopIteration:
                return

            # Second phase: Stream active session.
            # Clear screen filter is only applied before success (Phase 1).
            # Once the user has a prompt, Ctrl+L must work normally.
            for res in response_iterator:
                if res.stdout_data:
                    os.write(sys.stdout.fileno(), res.stdout_data)
        finally:
            termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_tty)

    @MethodHook
    @handle_errors
    def list_nodes(self, filter_str=None, format_str=None):
        req = connpy_pb2.FilterRequest(filter_str=filter_str or "", format_str=format_str or "")
        return from_value(self.stub.list_nodes(req).data) or []

    @MethodHook
    @handle_errors
    def list_folders(self, filter_str=None):
        req = connpy_pb2.FilterRequest(filter_str=filter_str or "")
        return from_value(self.stub.list_folders(req).data) or []

    @handle_errors
    def get_node_details(self, unique_id):
        return from_struct(self.stub.get_node_details(connpy_pb2.IdRequest(id=unique_id)).data)

    @handle_errors
    def explode_unique(self, unique_id):
        return from_value(self.stub.explode_unique(connpy_pb2.IdRequest(id=unique_id)).data)

    @handle_errors
    def validate_parent_folder(self, unique_id):
        self.stub.validate_parent_folder(connpy_pb2.IdRequest(id=unique_id))

    @handle_errors
    def generate_cache(self, nodes=None, folders=None, profiles=None):
        # 1. Update remote cache on server
        self.stub.generate_cache(Empty())

        # 2. Update local fzf/text cache files.
        # If no data was provided, fetch it all from the server to sync local files.
        if nodes is None and folders is None and profiles is None:
            nodes = self.list_nodes()
            folders = self.list_folders()
            # We don't have direct access to ProfileStub here, but usually the
            # node cache is what matters for fzf, so sync what we have.

        if nodes is not None or folders is not None or profiles is not None:
            self.config._generate_nodes_cache(nodes=nodes, folders=folders, profiles=profiles)

    def _trigger_local_cache_sync(self):
        """Helper to fetch remote data and update local fzf cache files after a change."""
        try:
            nodes = self.list_nodes()
            folders = self.list_folders()
            self.generate_cache(nodes=nodes, folders=folders)
        except Exception:
            # Failure to sync the cache shouldn't break the main operation's success feedback
            pass

    @handle_errors
    def add_node(self, unique_id, data, is_folder=False):
        req = connpy_pb2.NodeRequest(id=unique_id, data=to_struct(data), is_folder=is_folder)
        self.stub.add_node(req)
        self._trigger_local_cache_sync()

    @handle_errors
    def update_node(self, unique_id, data):
        req = connpy_pb2.NodeRequest(id=unique_id, data=to_struct(data), is_folder=False)
        self.stub.update_node(req)
        self._trigger_local_cache_sync()

    @handle_errors
    def delete_node(self, unique_id, is_folder=False):
        req = connpy_pb2.DeleteRequest(id=unique_id, is_folder=is_folder)
        self.stub.delete_node(req)
        self._trigger_local_cache_sync()

    @handle_errors
    def move_node(self, src_id, dst_id, copy=False):
        req = connpy_pb2.MoveRequest(src_id=src_id, dst_id=dst_id, copy=copy)
        self.stub.move_node(req)
        self._trigger_local_cache_sync()

    @handle_errors
    def bulk_add(self, ids, hosts, common_data):
        req = connpy_pb2.BulkRequest(ids=ids, hosts=hosts, common_data=to_struct(common_data))
        self.stub.bulk_add(req)
        self._trigger_local_cache_sync()

    @handle_errors
    def set_reserved_names(self, names):
        self.stub.set_reserved_names(connpy_pb2.ListRequest(items=names))
        self._trigger_local_cache_sync()

    @handle_errors
    def full_replace(self, connections, profiles):
        req = connpy_pb2.FullReplaceRequest(
            connections=to_struct(connections),
            profiles=to_struct(profiles)
        )
        self.stub.full_replace(req)
        self._trigger_local_cache_sync()

    @handle_errors
    def get_inventory(self):
        resp = self.stub.get_inventory(Empty())
        return {
            "connections": from_struct(resp.connections),
            "profiles": from_struct(resp.profiles)
        }

class ProfileStub:
    def __init__(self, channel, remote_host, node_stub=None):
        self.stub = connpy_pb2_grpc.ProfileServiceStub(channel)
        self.remote_host = remote_host
        self.node_stub = node_stub

    @handle_errors
    def list_profiles(self, filter_str=None):
        req = connpy_pb2.FilterRequest(filter_str=filter_str or "")
        return from_value(self.stub.list_profiles(req).data) or []

    @handle_errors
    def get_profile(self, name, resolve=True):
        req = connpy_pb2.ProfileRequest(name=name, resolve=resolve)
        return from_struct(self.stub.get_profile(req).data)

    @handle_errors
    def add_profile(self, name, data):
        req = connpy_pb2.NodeRequest(id=name, data=to_struct(data))
        self.stub.add_profile(req)
        if self.node_stub:
            self.node_stub._trigger_local_cache_sync()

    @handle_errors
    def resolve_node_data(self, node_data):
        req = connpy_pb2.StructRequest(data=to_struct(node_data))
        return from_struct(self.stub.resolve_node_data(req).data)

    @handle_errors
    def delete_profile(self, name):
        req = connpy_pb2.IdRequest(id=name)
        self.stub.delete_profile(req)
        if self.node_stub:
            self.node_stub._trigger_local_cache_sync()

    @handle_errors
    def update_profile(self, name, data):
        req = connpy_pb2.NodeRequest(id=name, data=to_struct(data))
        self.stub.update_profile(req)
        if self.node_stub:
            self.node_stub._trigger_local_cache_sync()

class ConfigStub:
    def __init__(self, channel, remote_host):
        self.stub = connpy_pb2_grpc.ConfigServiceStub(channel)
        self.remote_host = remote_host

    @handle_errors
    def get_settings(self):
        return from_struct(self.stub.get_settings(Empty()).data)

    @handle_errors
    def update_setting(self, key, value):
        self.stub.update_setting(connpy_pb2.UpdateRequest(key=key, value=to_value(value)))

    @handle_errors
    def get_default_dir(self):
        return self.stub.get_default_dir(Empty()).value

    @handle_errors
    def set_config_folder(self, folder):
        self.stub.set_config_folder(connpy_pb2.StringRequest(value=folder))

    @handle_errors
    def encrypt_password(self, password):
        return self.stub.encrypt_password(connpy_pb2.StringRequest(value=password)).value

class PluginStub:
    def __init__(self, channel, remote_host):
        self.stub = connpy_pb2_grpc.PluginServiceStub(channel)
        self.remote_stub = remote_plugin_pb2_grpc.RemotePluginServiceStub(channel)
        self.remote_host = remote_host

    @handle_errors
    def list_plugins(self):
        return from_value(self.stub.list_plugins(Empty()).data)

    @handle_errors
    def add_plugin(self, name, source_file, update=False):
        # Read the local file content to send it to the server
        with open(source_file, "r") as f:
            content = f.read()

        # Use source_file as a marker for "content-inside"
        marker_content = f"---CONTENT---\n{content}"
        req = connpy_pb2.PluginRequest(name=name, source_file=marker_content, update=update)
        self.stub.add_plugin(req)

    @handle_errors
    def delete_plugin(self, name):
        self.stub.delete_plugin(connpy_pb2.IdRequest(id=name))

    @handle_errors
    def enable_plugin(self, name):
        self.stub.enable_plugin(connpy_pb2.IdRequest(id=name))

    @handle_errors
    def disable_plugin(self, name):
        self.stub.disable_plugin(connpy_pb2.IdRequest(id=name))

    @handle_errors
    def get_plugin_source(self, name):
        resp = self.remote_stub.get_plugin_source(remote_plugin_pb2.IdRequest(id=name))
        return resp.value

    @handle_errors
    def invoke_plugin(self, name, args_namespace):
        import json
        args_dict = {k: v for k, v in vars(args_namespace).items()
                     if isinstance(v, (str, int, float, bool, list, type(None)))}
        if hasattr(args_namespace, "func") and hasattr(args_namespace.func, "__name__"):
            args_dict["__func_name__"] = args_namespace.func.__name__

        req = remote_plugin_pb2.PluginInvokeRequest(name=name, args_json=json.dumps(args_dict))
        for chunk in self.remote_stub.invoke_plugin(req):
            yield chunk.text

class ExecutionStub:
    def __init__(self, channel, remote_host):
        self.stub = connpy_pb2_grpc.ExecutionServiceStub(channel)
        self.remote_host = remote_host

    @handle_errors
    def run_commands(self, nodes_filter, commands, variables=None, parallel=10, timeout=10, folder=None, prompt=None, **kwargs):
        nodes_list = [nodes_filter] if isinstance(nodes_filter, str) else list(nodes_filter)
        req = connpy_pb2.RunRequest(
            nodes=nodes_list,
            commands=commands,
            folder=folder or "",
            prompt=prompt or "",
            parallel=parallel,
            timeout=timeout,
            name=kwargs.get("name", "")
        )
        if variables is not None:
            req.vars.CopyFrom(to_struct(variables))

        final_results = {}
        on_complete = kwargs.get("on_node_complete")

        for response in self.stub.run_commands(req):
            if on_complete:
                on_complete(response.unique_id, response.output, response.status)
            final_results[response.unique_id] = {
                "output": response.output,
                "status": response.status
            }

        return final_results

    @handle_errors
    def test_commands(self, nodes_filter, commands, expected, variables=None, parallel=10, timeout=10, prompt=None, **kwargs):
        nodes_list = [nodes_filter] if isinstance(nodes_filter, str) else list(nodes_filter)
        req = connpy_pb2.TestRequest(
            nodes=nodes_list,
            commands=commands,
            expected=expected if isinstance(expected, list) else [expected],
            folder=kwargs.get("folder", ""),
            prompt=prompt or "",
            parallel=parallel,
            timeout=timeout,
            name=kwargs.get("name", "")
        )
        if variables is not None:
            req.vars.CopyFrom(to_struct(variables))

        final_results = {}
        on_complete = kwargs.get("on_node_complete")

        for response in self.stub.test_commands(req):
            result_dict = from_struct(response.test_result) if response.HasField("test_result") else {}
            if on_complete:
                on_complete(response.unique_id, response.output, response.status, result_dict)
            final_results[response.unique_id] = result_dict

        return final_results

    @handle_errors
    def run_cli_script(self, nodes_filter, script_path, parallel=10):
        req = connpy_pb2.ScriptRequest(param1=nodes_filter, param2=script_path, parallel=parallel)
        return from_struct(self.stub.run_cli_script(req).data)

    @handle_errors
    def run_yaml_playbook(self, playbook_path, parallel=10):
        req = connpy_pb2.ScriptRequest(param1=playbook_path, parallel=parallel)
        return from_struct(self.stub.run_yaml_playbook(req).data)

class ImportExportStub:
    def __init__(self, channel, remote_host):
        self.stub = connpy_pb2_grpc.ImportExportServiceStub(channel)
        self.remote_host = remote_host

    @handle_errors
    def export_to_file(self, file_path, folders=None):
        req = connpy_pb2.ExportRequest(file_path=file_path, folders=folders or [])
        self.stub.export_to_file(req)

    @handle_errors
    def import_from_file(self, file_path):
        with open(file_path, "r") as f:
            content = f.read()
        # Marker to tell the server this is content, not a path
        marker_content = f"---YAML---\n{content}"
        self.stub.import_from_file(connpy_pb2.StringRequest(value=marker_content))

    @handle_errors
    def set_reserved_names(self, names):
        self.stub.set_reserved_names(connpy_pb2.ListRequest(items=names))

class AIStub:
    def __init__(self, channel, remote_host):
        self.stub = connpy_pb2_grpc.AIServiceStub(channel)
        self.remote_host = remote_host

    @handle_errors
    def ask(self, input_text, dryrun=False, chat_history=None, session_id=None, debug=False, status=None, **overrides):
        import queue
        from rich.prompt import Prompt
        from rich.text import Text
        from rich.live import Live
        from rich.panel import Panel
        from rich.markdown import Markdown

        req_queue = queue.Queue()

        initial_req = connpy_pb2.AskRequest(
            input_text=input_text,
            dryrun=dryrun,
            session_id=session_id or "",
            debug=debug,
            engineer_model=overrides.get("engineer_model", ""),
            engineer_api_key=overrides.get("engineer_api_key", ""),
            architect_model=overrides.get("architect_model", ""),
            architect_api_key=overrides.get("architect_api_key", ""),
            trust=overrides.get("trust", False)
        )
        if chat_history is not None:
            initial_req.chat_history.CopyFrom(to_value(chat_history))

        req_queue.put(initial_req)

        def request_generator():
            while True:
                req = req_queue.get()
                if req is None:
                    break
                yield req

        responses = self.stub.ask(request_generator())

        full_content = ""
        live_display = None
        final_result = {"response": "", "chat_history": []}

        # Background thread to pull responses from gRPC into a local queue.
        # This prevents KeyboardInterrupt from corrupting the gRPC iterator state.
        response_queue = queue.Queue()

        def pull_responses():
            try:
                for response in responses:
                    response_queue.put(("data", response))
            except Exception as e:
                response_queue.put(("error", e))
            finally:
                response_queue.put((None, None))

        threading.Thread(target=pull_responses, daemon=True).start()

        try:
            while True:
                try:
                    # Blocking get from the local queue (interruptible by signal)
                    msg_type, response = response_queue.get()
                except KeyboardInterrupt:
                    # Signal the interruption to the user
                    if status:
                        status.update("[error]Interrupted! Closing pending tasks...")

                    # Send the interrupt signal to the server
                    req_queue.put(connpy_pb2.AskRequest(interrupt=True))

                    # Continue the loop to receive remaining data and the summary from the queue
                    continue

                if msg_type is None:  # Sentinel
                    break

                if msg_type == "error":
                    # Re-raise or handle gRPC errors from the background thread
                    if isinstance(response, grpc.RpcError):
                        raise response
                    printer.warning(f"Stream interrupted: {response}")
                    break

                if response.status_update:
                    if response.requires_confirmation:
                        if status:
                            status.stop()

                        # Show prompt and wait for answer
                        prompt_text = Text.from_ansi(response.status_update)
                        ans = Prompt.ask(prompt_text)

                        if status:
                            status.update("[ai_status]Agent: Resuming...")
                            status.start()

                        req_queue.put(connpy_pb2.AskRequest(confirmation_answer=ans))
                        continue

                    if status:
                        status.update(response.status_update)
                    continue

                if response.debug_message:
                    if debug:
                        if live_display:
                            try:
                                live_display.stop()
                            except Exception:
                                pass
                        if status:
                            try:
                                status.stop()
                            except Exception:
                                pass
                        printer.console.print(Text.from_ansi(response.debug_message))
                        if live_display:
                            try:
                                live_display.start()
                            except Exception:
                                pass
                        elif status:
                            try:
                                status.start()
                            except Exception:
                                pass
                    continue

                if response.important_message:
                    if live_display:
                        try:
                            live_display.stop()
                        except Exception:
                            pass
                    if status:
                        try:
                            status.stop()
                        except Exception:
                            pass
                    printer.console.print(Text.from_ansi(response.important_message))
                    if live_display:
                        try:
                            live_display.start()
                        except Exception:
                            pass
                    elif status:
                        try:
                            status.start()
                        except Exception:
                            pass
                    continue

                if not response.is_final:
                    if response.text_chunk:
                        full_content += response.text_chunk

                        if not live_display:
                            if status:
                                try:
                                    status.stop()
                                except Exception:
                                    pass

                            from rich.console import Console as RichConsole
                            from ..printer import connpy_theme, get_original_stdout
                            stable_console = RichConsole(theme=connpy_theme, file=get_original_stdout())

                            # Default to the Engineer title during streaming; the final result corrects it if needed
                            live_display = Live(
                                Panel(Markdown(full_content), title="[bold engineer]Network Engineer[/bold engineer]", border_style="engineer", expand=False),
                                console=stable_console,
                                refresh_per_second=8,
                                transient=False
                            )
                            live_display.start()
                        else:
                            live_display.update(
                                Panel(Markdown(full_content), title="[bold engineer]Network Engineer[/bold engineer]", border_style="engineer", expand=False)
                            )
                    continue

                if response.is_final:
                    if live_display:
                        try:
                            live_display.stop()
                        except Exception:
                            pass
                    # Final stop for status to ensure it disappears before the panel
                    if status:
                        try:
                            status.stop()
                        except Exception:
                            pass

                    final_result = from_struct(response.full_result)
                    responder = final_result.get("responder", "engineer")
                    alias = "architect" if responder == "architect" else "engineer"
                    role_label = "Network Architect" if responder == "architect" else "Network Engineer"
                    title = f"[bold {alias}]{role_label}[/bold {alias}]"

                    content_to_print = full_content or final_result.get("response", "")
                    if content_to_print:
                        if live_display:
                            # Re-render the final frame with the correct title/colors
                            live_display.update(Panel(Markdown(content_to_print), title=title, border_style=alias, expand=False))
                        else:
                            printer.console.print(Panel(Markdown(content_to_print), title=title, border_style=alias, expand=False))
                    break
        except Exception as e:
            # Let handle_errors catch gRPC errors
            if isinstance(e, grpc.RpcError):
                raise
            printer.warning(f"Stream interrupted: {e}")
        finally:
            req_queue.put(None)

        if full_content:
            final_result["streamed"] = True

        return final_result

    @handle_errors
    def confirm(self, input_text, console=None):
        return self.stub.confirm(connpy_pb2.StringRequest(value=input_text)).value

    @handle_errors
    def list_sessions(self):
        return from_value(self.stub.list_sessions(Empty()).data)

    @handle_errors
    def delete_session(self, session_id):
        self.stub.delete_session(connpy_pb2.StringRequest(value=session_id))

    @handle_errors
    def configure_provider(self, provider, model=None, api_key=None):
        req = connpy_pb2.ProviderRequest(provider=provider, model=model or "", api_key=api_key or "")
        self.stub.configure_provider(req)

    @handle_errors
    def load_session_data(self, session_id):
        return from_struct(self.stub.load_session_data(connpy_pb2.StringRequest(value=session_id)).data)

class SystemStub:
    def __init__(self, channel, remote_host):
        self.stub = connpy_pb2_grpc.SystemServiceStub(channel)
        self.remote_host = remote_host

    @handle_errors
    def start_api(self, port=None):
        self.stub.start_api(connpy_pb2.IntRequest(value=port or 8048))

    @handle_errors
    def debug_api(self, port=None):
        self.stub.debug_api(connpy_pb2.IntRequest(value=port or 8048))

    @handle_errors
    def stop_api(self):
        self.stub.stop_api(Empty())

    @handle_errors
    def restart_api(self, port=None):
        self.stub.restart_api(connpy_pb2.IntRequest(value=port or 8048))

    @handle_errors
    def get_api_status(self):
        return self.stub.get_api_status(Empty()).value
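The interrupt handling in `AIStub.ask` above hinges on one pattern: a daemon thread drains the gRPC iterator into a local `queue.Queue`, so the main thread blocks on `queue.get()` (which a `KeyboardInterrupt` can safely interrupt) instead of on the iterator itself. A minimal stdlib-only sketch of that bridge, with illustrative names not taken from the source:

```python
import queue
import threading

def bridge(iterator):
    """Pull items from a (potentially blocking) iterator in a daemon thread
    and expose them through a local queue. The consumer blocks on queue.get(),
    which a signal can interrupt without corrupting the iterator's state."""
    q = queue.Queue()

    def pull():
        try:
            for item in iterator:
                q.put(("data", item))
        except Exception as e:
            q.put(("error", e))
        finally:
            q.put((None, None))  # sentinel marks end of stream

    threading.Thread(target=pull, daemon=True).start()
    return q

# Consume the bridged stream until the sentinel arrives
results = []
q = bridge(iter(range(3)))
while True:
    kind, item = q.get()
    if kind is None:
        break
    if kind == "data":
        results.append(item)
print(results)  # → [0, 1, 2]
```

The same loop shape appears in `ask`: a `("error", e)` tuple lets the consumer re-raise gRPC errors on the main thread, and the `(None, None)` sentinel replaces `StopIteration`.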
@@ -1,30 +0,0 @@
import json
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Struct, Value


def to_value(obj):
    if obj is None:
        v = Value()
        v.null_value = 0
        return v
    json_str = json.dumps(obj)
    v = Value()
    json_format.Parse(json_str, v)
    return v


def from_value(val):
    if not val.HasField("kind"):
        return None
    return json.loads(json_format.MessageToJson(val))


def to_struct(obj):
    if not obj:
        return Struct()
    s = Struct()
    json_format.ParseDict(obj, s)
    return s


def from_struct(struct):
    if not struct:
        return {}
    return json_format.MessageToDict(struct, preserving_proto_field_name=True)
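These helpers bridge plain Python dicts and `google.protobuf.Struct` messages. One caveat worth knowing when using them: `Struct` stores every number as a double, so integers come back as floats after a round trip. A small sketch of the round trip the helpers perform (sample data is illustrative):

```python
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Struct

# Round-trip a plain dict through Struct, mirroring to_struct/from_struct.
data = {"name": "router1", "ssh": True, "port": 22}

s = Struct()
json_format.ParseDict(data, s)
decoded = json_format.MessageToDict(s, preserving_proto_field_name=True)

# Strings and booleans survive unchanged; the integer becomes a float.
print(decoded["name"], decoded["ssh"], decoded["port"])  # → router1 True 22.0
```

This is why code consuming `from_struct` results should not assume exact integer types for numeric fields.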
+10
-14
@@ -1,7 +1,6 @@
#!/usr/bin/env python3
# Imports
from functools import wraps, partial, update_wrapper
from . import printer

# Functions and classes
@@ -20,21 +19,18 @@ class MethodHook:
             try:
                 args, kwargs = hook(*args, **kwargs)
             except Exception as e:
-                hook_name = getattr(hook, "__name__", str(hook))
-                printer.error(f"{self.func.__name__} Pre-hook {hook_name} raised an exception: {e}")
+                print(f"{self.func.__name__} Pre-hook {hook.__name__} raised an exception: {e}")
 
-        result = self.func(*args, **kwargs)
-
-        # Execute post-hooks after the original function
-        if self.post_hooks:
-            #printer.info(f"Executing {len(self.post_hooks)} post-hooks for {self.func.__name__}...")
-            pass
-        for hook in self.post_hooks:
-            try:
-                result = hook(*args, **kwargs, result=result) # Pass result to hooks
-            except Exception as e:
-                hook_name = getattr(hook, "__name__", str(hook))
-                printer.error(f"{self.func.__name__} Post-hook {hook_name} raised an exception: {e}")
+        try:
+            result = self.func(*args, **kwargs)
+        finally:
+            # Execute post-hooks after the original function
+            for hook in self.post_hooks:
+                try:
+                    result = hook(*args, **kwargs, result=result) # Pass result to hooks
+                except Exception as e:
+                    print(f"{self.func.__name__} Post-hook {hook.__name__} raised an exception: {e}")
 
         return result
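The hunk above moves the post-hook loop into a `try`/`finally`, so post-hooks fire even when the wrapped function raises, and catches each hook's own exceptions so one bad hook cannot abort the call. A minimal, self-contained sketch of that decorator pattern (names here are illustrative, not the repo's `MethodHook` API):

```python
def with_hooks(pre_hooks=(), post_hooks=()):
    """Decorator sketch: run pre-hooks, call the function, and always run
    post-hooks in a finally block so they fire even if the call raises."""
    def decorate(func):
        def wrapper(*args, **kwargs):
            for hook in pre_hooks:
                try:
                    args, kwargs = hook(*args, **kwargs)
                except Exception as e:
                    print(f"{func.__name__} pre-hook failed: {e}")
            result = None
            try:
                result = func(*args, **kwargs)
            finally:
                # Post-hooks may transform the result; failures are logged, not raised
                for hook in post_hooks:
                    try:
                        result = hook(result=result)
                    except Exception as e:
                        print(f"{func.__name__} post-hook failed: {e}")
            return result
        return wrapper
    return decorate

@with_hooks(post_hooks=[lambda result: result + 1])
def answer():
    return 41

print(answer())  # → 42
```

One subtlety of the `finally` placement: if `func` raises, the post-hooks still run, but the exception propagates afterward, so a post-hook's reassignment of `result` is only visible on the success path.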
+10
-131
@@ -4,34 +4,12 @@ import importlib.util
 import sys
 import argparse
 import os
-from connpy import printer
 
 class Plugins:
     def __init__(self):
         self.plugins = {}
         self.plugin_parsers = {}
         self.preloads = {}
-        self.remote_plugins = {}
-        self.preferences = {}
-
-    def _load_preferences(self, config_dir):
-        import json
-        path = os.path.join(config_dir, "plugin_preferences.json")
-        try:
-            with open(path) as f:
-                self.preferences = json.load(f)
-        except (FileNotFoundError, json.JSONDecodeError):
-            self.preferences = {}
-
-    def _save_preferences(self, config_dir):
-        import json
-        path = os.path.join(config_dir, "plugin_preferences.json")
-        try:
-            with open(path, "w") as f:
-                json.dump(self.preferences, f, indent=4)
-        except OSError as e:
-            printer.error(f"Failed to save plugin preferences: {e}")
-
     def verify_script(self, file_path):
         """
@@ -52,7 +30,8 @@ class Plugins:
 
     ### Verifications:
     - The presence of only allowed top-level elements.
     - The existence of two specific classes: 'Parser' and 'Entrypoint', and/or the specific class 'Preload'.
-    - 'Parser' class must only have an '__init__' method and must assign 'self.parser'.
+    - 'Parser' class must only have an '__init__' method and must assign 'self.parser'
+      and 'self.description'.
     - 'Entrypoint' class must have an '__init__' method accepting specific arguments.
 
     If any of these checks fail, the function returns an error message indicating
@@ -83,8 +62,8 @@ class Plugins:
                 if not (isinstance(node.test, ast.Compare) and
                         isinstance(node.test.left, ast.Name) and
                         node.test.left.id == '__name__' and
-                        ((hasattr(ast, 'Str') and isinstance(node.test.comparators[0], getattr(ast, 'Str')) and node.test.comparators[0].s == '__main__') or
-                         (hasattr(ast, 'Constant') and isinstance(node.test.comparators[0], getattr(ast, 'Constant')) and node.test.comparators[0].value == '__main__'))):
+                        isinstance(node.test.comparators[0], ast.Str) and
+                        node.test.comparators[0].s == '__main__'):
                     return "Only __name__ == __main__ If is allowed"
 
             elif not isinstance(node, (ast.FunctionDef, ast.ClassDef, ast.Import, ast.ImportFrom, ast.Pass)):
@@ -98,12 +77,11 @@ class Plugins:
             if not all(isinstance(method, ast.FunctionDef) and method.name == '__init__' for method in node.body):
                 return "Parser class should only have __init__ method"
 
-            # Check if 'self.parser' is assigned in __init__ method
+            # Check if 'self.parser' and 'self.description' are assigned in __init__ method
             init_method = node.body[0]
             assigned_attrs = [target.attr for expr in init_method.body if isinstance(expr, ast.Assign) for target in expr.targets if isinstance(target, ast.Attribute) and isinstance(target.value, ast.Name) and target.value.id == 'self']
-            if 'parser' not in assigned_attrs:
-                return "Parser class should set self.parser"
-
+            if 'parser' not in assigned_attrs or 'description' not in assigned_attrs:
+                return "Parser class should set self.parser and self.description"  # 'self.parser' or 'self.description' not assigned in __init__
 
         elif node.name == 'Entrypoint':
             has_entrypoint = True
@@ -135,123 +113,24 @@ class Plugins:
         spec.loader.exec_module(module)
         return module
 
-    def _import_plugins_to_argparse(self, directory, subparsers, remote_enabled=False):
-        if not os.path.exists(directory):
-            return
+    def _import_plugins_to_argparse(self, directory, subparsers):
         for filename in os.listdir(directory):
             commands = subparsers.choices.keys()
             if filename.endswith(".py"):
                 root_filename = os.path.splitext(filename)[0]
                 if root_filename in commands:
                     continue
 
-                # Check preferences: if remote is preferred AND remote is enabled, skip local loading
-                if remote_enabled and self.preferences.get(root_filename) == "remote":
-                    continue
-
                 # Construct the full path
                 filepath = os.path.join(directory, filename)
                 check_file = self.verify_script(filepath)
                 if check_file:
-                    printer.error(f"Failed to load plugin: (unknown). Reason: {check_file}")
+                    print(f"Failed to load plugin: (unknown). Reason: {check_file}")
                     continue
                 else:
                     self.plugins[root_filename] = self._import_from_path(filepath)
                     if hasattr(self.plugins[root_filename], "Parser"):
                         self.plugin_parsers[root_filename] = self.plugins[root_filename].Parser()
-                        plugin = self.plugin_parsers[root_filename]
-                        # Default to RichHelpFormatter if plugin doesn't set one
-                        try:
-                            from rich_argparse import RichHelpFormatter as _RHF
-                            fmt = plugin.parser.formatter_class
-                            if fmt is argparse.HelpFormatter or fmt is argparse.RawTextHelpFormatter or fmt is argparse.RawDescriptionHelpFormatter:
-                                fmt = _RHF
-                        except ImportError:
-                            fmt = plugin.parser.formatter_class
-                        subparsers.add_parser(root_filename, parents=[self.plugin_parsers[root_filename].parser], add_help=False, help=plugin.parser.description, usage=plugin.parser.usage, description=plugin.parser.description, epilog=plugin.parser.epilog, formatter_class=fmt)
+                        subparsers.add_parser(root_filename, parents=[self.plugin_parsers[root_filename].parser], add_help=False, description=self.plugin_parsers[root_filename].description)
                     if hasattr(self.plugins[root_filename], "Preload"):
                         self.preloads[root_filename] = self.plugins[root_filename]
 
-    def _import_remote_plugins_to_argparse(self, plugin_stub, subparsers, cache_dir, force_sync=False):
-        import hashlib
-        os.makedirs(cache_dir, exist_ok=True)
-
-        try:
-            remote_plugins_info = plugin_stub.list_plugins()
-        except Exception:
-            return
-
-        # Pruning: remove local cached files that are no longer on the server
-        for local_file in os.listdir(cache_dir):
-            if local_file.endswith(".py"):
-                name = local_file[:-3]
-                if name not in remote_plugins_info:
-                    try:
-                        os.remove(os.path.join(cache_dir, local_file))
-                    except Exception:
-                        pass
-
-        for name, info in remote_plugins_info.items():
-            if not info.get("enabled", True):
-                continue
-
-            pref = self.preferences.get(name, "local")
-            if pref != "remote" and name in self.plugins:
-                continue
-            if not force_sync and name in subparsers.choices:
-                continue
-
-            cache_path = os.path.join(cache_dir, f"{name}.py")
-
-            # Hash comparison
-            remote_hash = info.get("hash", "")
-            local_hash = ""
-            if os.path.exists(cache_path):
-                try:
-                    with open(cache_path, "rb") as f:
-                        local_hash = hashlib.md5(f.read()).hexdigest()
-                except Exception:
-                    pass
-
-            # Update only if the hash differs or force_sync is True
-            if force_sync or remote_hash != local_hash or not os.path.exists(cache_path):
-                try:
-                    source = plugin_stub.get_plugin_source(name)
-                    with open(cache_path, "w") as f:
-                        f.write(source)
-                except Exception as e:
-                    printer.warning(f"Failed to sync remote plugin {name}: {e}")
-                    continue
-
-            # Verify and load
-            check_file = self.verify_script(cache_path)
-            if check_file:
-                printer.warning(f"Remote plugin {name} failed verification: {check_file}")
-                continue
-
-            module = self._import_from_path(cache_path)
-            if hasattr(module, "Parser"):
-                self.plugin_parsers[name] = module.Parser()
-                self.remote_plugins[name] = True
-                plugin = self.plugin_parsers[name]
-                try:
-                    from rich_argparse import RichHelpFormatter as _RHF
-                    fmt = plugin.parser.formatter_class
-                    if fmt is argparse.HelpFormatter or fmt is argparse.RawTextHelpFormatter or fmt is argparse.RawDescriptionHelpFormatter:
-                        fmt = _RHF
-                except ImportError:
-                    fmt = plugin.parser.formatter_class
-
-                # If force_sync, we might be re-registering, but argparse subparsers.add_parser
-                # might fail if it exists, so check whether it's already there.
-                if name not in subparsers.choices:
-                    subparsers.add_parser(
-                        name,
-                        parents=[plugin.parser],
-                        add_help=False,
-                        help=f"[remote] {plugin.parser.description}",
-                        usage=plugin.parser.usage,
-                        description=plugin.parser.description,
-                        epilog=plugin.parser.epilog,
-                        formatter_class=fmt
-                    )
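The `verify_script` checks referenced above whitelist top-level AST nodes before a plugin file is imported. The core idea, detached from the repo's full rule set, can be sketched as follows (function name and rule subset are illustrative; `ast.Constant` is used since `ast.Str` is deprecated in modern Python):

```python
import ast

def allowed_top_level(source):
    """Sketch of AST-based script vetting: accept only imports, function
    and class definitions, pass, and `if __name__ == '__main__':` blocks
    at the top level of the module."""
    tree = ast.parse(source)
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.ClassDef, ast.Import,
                             ast.ImportFrom, ast.Pass)):
            continue
        if (isinstance(node, ast.If)
                and isinstance(node.test, ast.Compare)
                and isinstance(node.test.left, ast.Name)
                and node.test.left.id == '__name__'
                and isinstance(node.test.comparators[0], ast.Constant)
                and node.test.comparators[0].value == '__main__'):
            continue
        return False  # anything else (bare calls, assignments, ...) is rejected
    return True

print(allowed_top_level("import os\n\ndef main():\n    pass\n"))  # → True
print(allowed_top_level("print('hi')\n"))  # → False
```

This is a structural filter, not a sandbox: it constrains what runs at import time, but whatever the allowed functions do when invoked is unrestricted.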
@@ -1,468 +0,0 @@
import sys
import threading
import io

_local = threading.local()

class ThreadLocalStream:
    def __init__(self, original):
        self._original = original

    def _get_stream(self):
        s = getattr(_local, 'stream', None)
        return s if s is not None else self._original

    def write(self, data):
        stream = self._get_stream()
        if stream:
            stream.write(data)

    def flush(self):
        stream = self._get_stream()
        if stream:
            stream.flush()

    def isatty(self):
        stream = self._get_stream()
        return stream.isatty() if stream else False

    def __getattr__(self, name):
        # Avoid recursion during initialization or if _original is not yet set
        if name in ('_original', '_get_stream'):
            raise AttributeError(name)
        stream = self._get_stream()
        if stream:
            return getattr(stream, name)
        raise AttributeError(f"'NoneType' object has no attribute '{name}'")

# Patch stdout/stderr only once at module level
if not isinstance(sys.stdout, ThreadLocalStream):
    sys.stdout = ThreadLocalStream(sys.stdout)
if not isinstance(sys.stderr, ThreadLocalStream):
    sys.stderr = ThreadLocalStream(sys.stderr)

def _get_local():
    if not hasattr(_local, 'console'):
        _local.console = None
    if not hasattr(_local, 'err_console'):
        _local.err_console = None
    if not hasattr(_local, 'theme'):
        _local.theme = None
    return _local

def set_thread_stream(stream):
    if stream is None:
        if hasattr(_local, 'stream'):
            del _local.stream
    else:
        _local.stream = stream

def get_original_stdout():
    if isinstance(sys.stdout, ThreadLocalStream):
        return sys.stdout._original
    return sys.stdout

def get_original_stderr():
    if isinstance(sys.stderr, ThreadLocalStream):
        return sys.stderr._original
    return sys.stderr

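The per-thread redirection implemented by `ThreadLocalStream` and `set_thread_stream` can be sketched standalone with only the standard library (the `DemoStream` class below is a trimmed, hypothetical stand-in, not the module's actual class):

```python
import io
import threading

_local = threading.local()

class DemoStream:
    """Minimal stand-in for ThreadLocalStream: route writes per thread."""
    def __init__(self, original):
        self._original = original

    def write(self, data):
        # Each thread sees its own _local.stream; fall back to the original.
        getattr(_local, 'stream', self._original).write(data)

captured = io.StringIO()
fallback = io.StringIO()
out = DemoStream(fallback)

def worker():
    _local.stream = captured   # redirect only this thread's output
    out.write("from worker\n")

t = threading.Thread(target=worker)
t.start()
t.join()

out.write("from main\n")       # main thread never set a stream, uses fallback

print(captured.getvalue().strip())
print(fallback.getvalue().strip())
```

This is the property the module relies on for gRPC handlers: each server thread can claim `sys.stdout` for its own client without touching output from other threads.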
# Centralized design system
STYLES = {
    "info": "cyan",
    "warning": "yellow",
    "error": "red",
    "success": "green",
    "debug": "dim",
    "header": "bold cyan",
    "key": "bold cyan",
    "border": "cyan",
    "pass": "bold green",
    "fail": "bold red",
    "engineer": "blue",
    "architect": "medium_purple",
    "ai_status": "bold green",
    "user_prompt": "bold cyan",
    "unavailable": "orange3",
}

def _get_console():
    local = _get_local()

    # Self-healing patch: if sys.stdout was replaced (e.g. by pytest), re-wrap it.
    if not isinstance(sys.stdout, ThreadLocalStream):
        sys.stdout = ThreadLocalStream(sys.stdout)

    current_out = sys.stdout

    # Detect if we need to recreate the console (stream changed or closed)
    needs_recreate = (local.console is None or
                      getattr(local, '_last_stdout', None) is not current_out)

    # Extra check for closed files in test environments
    if not needs_recreate and local.console is not None:
        try:
            if hasattr(local.console.file, 'closed') and local.console.file.closed:
                needs_recreate = True
        except Exception:
            pass

    if needs_recreate:
        from rich.console import Console
        from rich.theme import Theme
        if local.theme is None:
            local.theme = Theme(STYLES)
        local.console = Console(theme=local.theme, file=current_out)
        local._last_stdout = current_out

    return local.console

def _get_err_console():
    local = _get_local()

    # Self-healing patch for stderr
    if not isinstance(sys.stderr, ThreadLocalStream):
        sys.stderr = ThreadLocalStream(sys.stderr)

    current_err = sys.stderr

    needs_recreate = (local.err_console is None or
                      getattr(local, '_last_stderr', None) is not current_err)

    if not needs_recreate and local.err_console is not None:
        try:
            if hasattr(local.err_console.file, 'closed') and local.err_console.file.closed:
                needs_recreate = True
        except Exception:
            pass

    if needs_recreate:
        from rich.console import Console
        from rich.theme import Theme
        if local.theme is None:
            local.theme = Theme(STYLES)
        local.err_console = Console(stderr=True, theme=local.theme, file=current_err)
        local._last_stderr = current_err

    return local.err_console

def set_thread_console(console):
    _get_local().console = console

def set_thread_err_console(console):
    _get_local().err_console = console

def clear_thread_state():
    """Removes all thread-local printer state. Useful for gRPC thread reuse."""
    for attr in ["stream", "console", "err_console", "theme", "_last_stdout", "_last_stderr"]:
        if hasattr(_local, attr):
            delattr(_local, attr)

@property
def console():
    return _get_console()

@property
def err_console():
    return _get_err_console()

@property
def connpy_theme():
    local = _get_local()
    if local.theme is None:
        from rich.theme import Theme
        local.theme = Theme(STYLES)
    return local.theme

def apply_theme(user_styles=None):
    """
    Updates the global console themes with user-defined styles.
    If a style is missing in user_styles, it falls back to the default in STYLES.
    """
    local = _get_local()
    from rich.theme import Theme

    # Start with a copy of defaults
    active_styles = STYLES.copy()
    if user_styles:
        # Merge user styles (only if they are valid keys)
        for key, value in user_styles.items():
            if key in active_styles:
                active_styles[key] = value

    local.theme = Theme(active_styles)
    if local.console:
        local.console.push_theme(local.theme)
    if local.err_console:
        local.err_console.push_theme(local.theme)
    return active_styles


def _format_multiline(tag, message, style=None):
    message = str(message)
    lines = message.splitlines()
    if not lines:
        return f"[{style}]\\[{tag}][/{style}]" if style else f"\\[{tag}]"

    # Apply style to the tag if provided
    styled_tag = f"[{style}]\\[{tag}][/{style}]" if style else f"\\[{tag}]"
    formatted = [f"{styled_tag} {lines[0]}"]

    # Indent subsequent lines
    indent = " " * (len(tag) + 3)
    for line in lines[1:]:
        formatted.append(f"{indent}{line}")
    return "\n".join(formatted)

def info(message):
    _get_console().print(_format_multiline("i", message, style="info"))

def success(message):
    _get_console().print(_format_multiline("✓", message, style="success"))

def start(message):
    _get_console().print(_format_multiline("+", message, style="success"))

def warning(message):
    _get_console().print(_format_multiline("!", message, style="warning"))

def error(message):
    _get_err_console().print(_format_multiline("✗", message, style="error"))

def debug(message):
    _get_console().print(_format_multiline("d", message, style="debug"))

def custom(tag, message):
    _get_console().print(_format_multiline(tag, message, style="header"))

def table(title, columns, rows, header_style="header", box=None):
    from rich.table import Table
    t = Table(title=title, header_style=header_style, box=box)
    for col in columns:
        t.add_column(col)
    for row in rows:
        t.add_row(*[str(item) for item in row])
    _get_console().print(t)

def data(title, content, language="yaml"):
    """Display structured data with syntax highlighting inside a panel."""
    from rich.syntax import Syntax
    from rich.panel import Panel
    syntax = Syntax(content, language, theme="ansi_dark", word_wrap=True, background_color="default")
    panel = Panel(syntax, title=f"[header]{title}[/header]", border_style="border", expand=False)
    _get_console().print(panel)

def node_panel(unique, output, status, title_prefix=""):
    """Display node execution result in a styled panel."""
    from rich.panel import Panel
    from rich.text import Text
    from rich.console import Group
    import os

    try:
        cols, _ = os.get_terminal_size()
    except OSError:
        cols = 80

    if status == 0:
        status_str = "[pass]✓ PASS[/pass]"
        border = "pass"
    else:
        status_str = f"[fail]✗ FAIL({status})[/fail]"
        border = "fail"

    title_line = f"{title_prefix}[bold]{unique}[/bold] — {status_str}"
    stripped = output.strip() if output else ""
    code_block = Text(stripped + "\n") if stripped else Text()

    _get_console().print(Panel(Group(Text(), code_block), title=title_line, width=cols, border_style=border))

def test_panel(unique, output, status, result):
    """Display test execution result in a styled panel."""
    from rich.panel import Panel
    from rich.text import Text
    from rich.console import Group
    import os

    try:
        cols, _ = os.get_terminal_size()
    except OSError:
        cols = 80

    is_pass = (status == 0 and result and all(result.values()))

    if is_pass:
        status_str = "[pass]✓ PASS[/pass]"
        border = "pass"
    else:
        status_str = "[fail]✗ FAIL[/fail]"
        border = "fail"

    title_line = f"[bold]{unique}[/bold] — {status_str}"

    stripped = output.strip() if output else ""
    code_block = Text(stripped + "\n") if stripped else Text()

    test_results = Text()
    test_results.append("\nTEST RESULTS:\n", style="header")
    if result:
        max_key_len = max(len(k) for k in result.keys())
        for k, v in result.items():
            mark = "✓" if v else "✗"
            style = "success" if v else "error"
            test_results.append(f"  {k.ljust(max_key_len)}  {mark}\n", style=style)
    else:
        test_results.append("  No results (execution failed)\n", style="error")

    _get_console().print(Panel(Group(Text(), code_block, test_results), title=title_line, width=cols, border_style=border))

def test_summary(results):
    """Print an aggregate summary of multiple test results in a single panel."""
    from rich.panel import Panel
    from rich.text import Text
    from rich.console import Group
    import os

    try:
        cols, _ = os.get_terminal_size()
    except OSError:
        cols = 80

    summary_content = Text()
    total_passed = 0
    total_failed = 0
    total_partial = 0

    if not results:
        summary_content.append(" No test results found.\n", style="error")
    else:
        for node, test_result in results.items():
            summary_content.append("• ", style="border")
            summary_content.append(f"{node.ljust(40)}", style="bold")

            if test_result:
                passed_count = sum(1 for v in test_result.values() if v)
                total_count = len(test_result)

                if passed_count == total_count:
                    total_passed += 1
                    node_style = "success"
                    mark = "✓ PASS"
                elif passed_count > 0:
                    total_partial += 1
                    node_style = "warning"
                    mark = f"⚠ PARTIAL ({passed_count}/{total_count})"
                else:
                    total_failed += 1
                    node_style = "error"
                    mark = "✗ FAIL"

                summary_content.append(f" {mark}\n", style=node_style)
                for k, v in test_result.items():
                    res_mark = "✓" if v else "✗"
                    res_style = "success" if v else "error"
                    summary_content.append(f"    {k.ljust(38)} {res_mark}\n", style=res_style)
            else:
                total_failed += 1
                summary_content.append(" ✗ FAIL\n", style="error")
                summary_content.append("  No results (execution failed)\n", style="error")

    status_parts = []
    if total_passed: status_parts.append(f"[pass]{total_passed} PASSED[/pass]")
    if total_partial: status_parts.append(f"[warning]{total_partial} PARTIAL[/warning]")
    if total_failed: status_parts.append(f"[fail]{total_failed} FAILED[/fail]")

    status_str = " | ".join(status_parts) if status_parts else "[error]NO RESULTS[/error]"
    title_line = f"AGGREGATE TEST SUMMARY — {status_str}"

    _get_console().print(Panel(Group(Text(), summary_content), title=title_line, width=cols, border_style="border"))

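The PASS / PARTIAL / FAIL bucketing used by `test_summary` can be isolated as a small pure function (the function name and sample test dicts below are illustrative, not from the module):

```python
def classify(test_result):
    """Bucket one node's test dict into PASS / PARTIAL / FAIL,
    mirroring the counting logic in test_summary."""
    if not test_result:
        return "FAIL"  # execution failed, no per-test results at all
    passed = sum(1 for v in test_result.values() if v)
    if passed == len(test_result):
        return "PASS"
    return f"PARTIAL ({passed}/{len(test_result)})" if passed else "FAIL"

print(classify({"ping": True, "ssh": True}))    # → PASS
print(classify({"ping": True, "ssh": False}))   # → PARTIAL (1/2)
print(classify({}))                             # → FAIL
```

Keeping the classification separate from the rich rendering would also make it trivial to unit-test without a terminal.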
def run_summary(results):
    """Print an aggregate summary of multiple execution results in a single panel."""
    from rich.panel import Panel
    from rich.text import Text
    from rich.console import Group
    import os

    try:
        cols, _ = os.get_terminal_size()
    except OSError:
        cols = 80

    summary_content = Text()
    total_ok = 0
    total_err = 0

    if not results:
        summary_content.append(" No execution results found.\n", style="error")
    else:
        for node, data in results.items():
            summary_content.append("• ", style="border")
            summary_content.append(f"{node.ljust(40)}", style="bold")

            # Check if we have a status dict or just output (for backward compatibility)
            status = data.get("status", 0) if isinstance(data, dict) else 0

            if status == 0:
                total_ok += 1
                summary_content.append(" ✓ DONE\n", style="success")
            else:
                total_err += 1
                summary_content.append(f" ✗ FAIL({status})\n", style="error")

    status_parts = []
    if total_ok: status_parts.append(f"[success]{total_ok} DONE[/success]")
    if total_err: status_parts.append(f"[error]{total_err} FAILED[/error]")

    status_str = " | ".join(status_parts) if status_parts else "[error]NO RESULTS[/error]"
    title_line = f"AGGREGATE EXECUTION SUMMARY — {status_str}"

    _get_console().print(Panel(Group(Text(), summary_content), title=title_line, width=cols, border_style="border"))

def header(text):
    """Print a section header."""
    from rich.rule import Rule
    _get_console().print(Rule(text, style="header"))

def kv(key, value):
    """Print an inline key-value pair."""
    _get_console().print(f"[key]{key}[/key]: {value}")

def confirm_action(item, action):
    """Print a confirmation pre-action message."""
    _get_console().print(f"\\[i] [bold]{action}[/bold]: {item}", style="info")

# Compatibility proxies
class _ConsoleProxy:
    def __getattr__(self, name):
        return getattr(_get_console(), name)
    def __call__(self, *args, **kwargs):
        return _get_console()(*args, **kwargs)
    def __enter__(self):
        return _get_console().__enter__()
    def __exit__(self, exc_type, exc_val, exc_tb):
        return _get_console().__exit__(exc_type, exc_val, exc_tb)

class _ErrConsoleProxy:
    def __getattr__(self, name):
        return getattr(_get_err_console(), name)
    def __call__(self, *args, **kwargs):
        return _get_err_console()(*args, **kwargs)
    def __enter__(self):
        return _get_err_console().__enter__()
    def __exit__(self, exc_type, exc_val, exc_tb):
        return _get_err_console().__exit__(exc_type, exc_val, exc_tb)

console = _ConsoleProxy()
err_console = _ErrConsoleProxy()

# theme also needs to be lazy
class _ThemeProxy:
    def __getattr__(self, name):
        local = _get_local()
        if local.theme is None:
            from rich.theme import Theme
            local.theme = Theme(STYLES)
        return getattr(local.theme, name)

connpy_theme = _ThemeProxy()
@@ -1,259 +0,0 @@
syntax = "proto3";

package connpy;

import "google/protobuf/struct.proto";
import "google/protobuf/empty.proto";

service NodeService {
    rpc list_nodes (FilterRequest) returns (ValueResponse) {}
    rpc list_folders (FilterRequest) returns (ValueResponse) {}
    rpc get_node_details (IdRequest) returns (StructResponse) {}
    rpc explode_unique (IdRequest) returns (ValueResponse) {}
    rpc generate_cache (google.protobuf.Empty) returns (google.protobuf.Empty) {}
    rpc add_node (NodeRequest) returns (google.protobuf.Empty) {}
    rpc update_node (NodeRequest) returns (google.protobuf.Empty) {}
    rpc delete_node (DeleteRequest) returns (google.protobuf.Empty) {}
    rpc move_node (MoveRequest) returns (google.protobuf.Empty) {}
    rpc bulk_add (BulkRequest) returns (google.protobuf.Empty) {}
    rpc validate_parent_folder (IdRequest) returns (google.protobuf.Empty) {}
    rpc set_reserved_names (ListRequest) returns (google.protobuf.Empty) {}
    rpc interact_node (stream InteractRequest) returns (stream InteractResponse) {}
    rpc full_replace (FullReplaceRequest) returns (google.protobuf.Empty) {}
    rpc get_inventory (google.protobuf.Empty) returns (FullReplaceRequest) {}
}

service ProfileService {
    rpc list_profiles (FilterRequest) returns (ValueResponse) {}
    rpc get_profile (ProfileRequest) returns (StructResponse) {}
    rpc add_profile (NodeRequest) returns (google.protobuf.Empty) {}
    rpc resolve_node_data (StructRequest) returns (StructResponse) {}
    rpc delete_profile (IdRequest) returns (google.protobuf.Empty) {}
    rpc update_profile (NodeRequest) returns (google.protobuf.Empty) {}
}

service ConfigService {
    rpc get_settings (google.protobuf.Empty) returns (StructResponse) {}
    rpc get_default_dir (google.protobuf.Empty) returns (StringResponse) {}
    rpc set_config_folder (StringRequest) returns (google.protobuf.Empty) {}
    rpc update_setting (UpdateRequest) returns (google.protobuf.Empty) {}
    rpc encrypt_password (StringRequest) returns (StringResponse) {}
    rpc apply_theme_from_file (StringRequest) returns (StructResponse) {}
}

service PluginService {
    rpc list_plugins (google.protobuf.Empty) returns (ValueResponse) {}
    rpc add_plugin (PluginRequest) returns (google.protobuf.Empty) {}
    rpc delete_plugin (IdRequest) returns (google.protobuf.Empty) {}
    rpc enable_plugin (IdRequest) returns (google.protobuf.Empty) {}
    rpc disable_plugin (IdRequest) returns (google.protobuf.Empty) {}
}

service ExecutionService {
    rpc run_commands (RunRequest) returns (stream NodeRunResult) {}
    rpc test_commands (TestRequest) returns (stream NodeRunResult) {}
    rpc run_cli_script (ScriptRequest) returns (StructResponse) {}
    rpc run_yaml_playbook (ScriptRequest) returns (StructResponse) {}
}

service ImportExportService {
    rpc export_to_file (ExportRequest) returns (google.protobuf.Empty) {}
    rpc import_from_file (StringRequest) returns (google.protobuf.Empty) {}
    rpc set_reserved_names (ListRequest) returns (google.protobuf.Empty) {}
}

service AIService {
    rpc ask (stream AskRequest) returns (stream AIResponse) {}
    rpc confirm (StringRequest) returns (BoolResponse) {}
    rpc list_sessions (google.protobuf.Empty) returns (ValueResponse) {}
    rpc delete_session (StringRequest) returns (google.protobuf.Empty) {}
    rpc configure_provider (ProviderRequest) returns (google.protobuf.Empty) {}
    rpc load_session_data (StringRequest) returns (StructResponse) {}
}

service SystemService {
    rpc start_api (IntRequest) returns (google.protobuf.Empty) {}
    rpc debug_api (IntRequest) returns (google.protobuf.Empty) {}
    rpc stop_api (google.protobuf.Empty) returns (google.protobuf.Empty) {}
    rpc restart_api (IntRequest) returns (google.protobuf.Empty) {}
    rpc get_api_status (google.protobuf.Empty) returns (BoolResponse) {}
}

// Request and Response Messages

message InteractRequest {
    string id = 1;
    bool sftp = 2;
    bool debug = 3;
    bytes stdin_data = 4;
    int32 cols = 5;
    int32 rows = 6;
    string connection_params_json = 7;
}

message InteractResponse {
    bytes stdout_data = 1;
    bool success = 2;
    string error_message = 3;
}

message FilterRequest {
    string filter_str = 1;
    string format_str = 2;
}

message ValueResponse {
    google.protobuf.Value data = 1;
}

message IdRequest {
    string id = 1;
}

message NodeRequest {
    string id = 1;
    google.protobuf.Struct data = 2;
    bool is_folder = 3;
}

message DeleteRequest {
    string id = 1;
    bool is_folder = 2;
}

message MessageValue {
    string value = 1;
}

message MoveRequest {
    string src_id = 1;
    string dst_id = 2;
    bool copy = 3;
}

message BulkRequest {
    repeated string ids = 1;
    repeated string hosts = 2;
    google.protobuf.Struct common_data = 3;
}

message StructResponse {
    google.protobuf.Struct data = 1;
}

message ProfileRequest {
    string name = 1;
    bool resolve = 2;
}

message StructRequest {
    google.protobuf.Struct data = 1;
}

message StringRequest {
    string value = 1;
}

message StringResponse {
    string value = 1;
}

message UpdateRequest {
    string key = 1;
    google.protobuf.Value value = 2;
}

message PluginRequest {
    string name = 1;
    string source_file = 2;
    bool update = 3;
}

message RunRequest {
    repeated string nodes = 1;
    repeated string commands = 2;
    string folder = 3;
    string prompt = 4;
    int32 parallel = 5;
    google.protobuf.Struct vars = 6;
    int32 timeout = 7;
    string name = 8;
}

message TestRequest {
    repeated string nodes = 1;
    repeated string commands = 2;
    repeated string expected = 3;
    string folder = 4;
    string prompt = 5;
    int32 parallel = 6;
    google.protobuf.Struct vars = 7;
    int32 timeout = 8;
    string name = 9;
}

message ScriptRequest {
    string param1 = 1; // nodes_filter or playbook_path
    string param2 = 2; // script_path or ""
    int32 parallel = 3;
}

message ExportRequest {
    string file_path = 1;
    repeated string folders = 2;
}

message ListRequest {
    repeated string items = 1;
}

message AskRequest {
    string input_text = 1;
    bool dryrun = 2;
    google.protobuf.Value chat_history = 3;
    string session_id = 4;
    bool debug = 5;
    string engineer_model = 6;
    string engineer_api_key = 7;
    string architect_model = 8;
    string architect_api_key = 9;
    bool trust = 10;
    string confirmation_answer = 11;
    bool interrupt = 12;
}

message AIResponse {
    string text_chunk = 1;
    bool is_final = 2;
    google.protobuf.Struct full_result = 3;
    string status_update = 4;
    string debug_message = 5;
    bool requires_confirmation = 6;
    string important_message = 7;
}

message BoolResponse {
    bool value = 1;
}

message ProviderRequest {
    string provider = 1;
    string model = 2;
    string api_key = 3;
}

message IntRequest {
    int32 value = 1;
}

message NodeRunResult {
    string unique_id = 1;
    string output = 2;
    int32 status = 3;
    google.protobuf.Struct test_result = 4;
}

message FullReplaceRequest {
    google.protobuf.Struct connections = 1;
    google.protobuf.Struct profiles = 2;
}
@@ -1,28 +0,0 @@
from .exceptions import *
from .node_service import NodeService
from .profile_service import ProfileService
from .execution_service import ExecutionService
from .import_export_service import ImportExportService
from .ai_service import AIService
from .plugin_service import PluginService
from .config_service import ConfigService
from .system_service import SystemService

__all__ = [
    'NodeService',
    'ProfileService',
    'ExecutionService',
    'ImportExportService',
    'AIService',
    'PluginService',
    'ConfigService',
    'SystemService',
    'ConnpyError',
    'NodeNotFoundError',
    'NodeAlreadyExistsError',
    'ProfileNotFoundError',
    'ProfileAlreadyExistsError',
    'ExecutionError',
    'InvalidConfigurationError'
]

@@ -1,53 +0,0 @@
from .base import BaseService
from .exceptions import InvalidConfigurationError

class AIService(BaseService):
    """Business logic for interacting with AI agents and LLM configurations."""

    def ask(self, input_text, dryrun=False, chat_history=None, status=None, debug=False, session_id=None, console=None, chunk_callback=None, confirm_handler=None, trust=False, **overrides):
        """Send a prompt to the AI agent."""
        from connpy.ai import ai
        agent = ai(self.config, console=console, confirm_handler=confirm_handler, trust=trust, **overrides)
        return agent.ask(input_text, dryrun, chat_history, status=status, debug=debug, session_id=session_id, chunk_callback=chunk_callback)


    def confirm(self, input_text, console=None):
        """Ask for a safe confirmation of an action."""
        from connpy.ai import ai
        agent = ai(self.config, console=console)
        return agent.confirm(input_text)


    def list_sessions(self):
        """Return a list of all saved AI sessions."""
        from connpy.ai import ai
        agent = ai(self.config)
        return agent._get_sessions()

    def delete_session(self, session_id):
        """Delete an AI session by ID."""
        import os
        sessions_dir = os.path.join(self.config.defaultdir, "ai_sessions")
        path = os.path.join(sessions_dir, f"{session_id}.json")
        if os.path.exists(path):
            os.remove(path)
        else:
            raise InvalidConfigurationError(f"Session '{session_id}' not found.")

    def configure_provider(self, provider, model=None, api_key=None):
        """Update AI provider settings in the configuration."""
        settings = self.config.config.get("ai", {})
        if model:
            settings[f"{provider}_model"] = model
        if api_key:
            settings[f"{provider}_api_key"] = api_key

        self.config.config["ai"] = settings
        self.config._saveconfig(self.config.file)

    def load_session_data(self, session_id):
        """Load a session's raw data by ID."""
        from connpy.ai import ai
        agent = ai(self.config)
        return agent.load_session_data(session_id)

@@ -1,33 +0,0 @@
from connpy.hooks import MethodHook

class BaseService:
    """Base class for all connpy services, providing common configuration access."""

    def __init__(self, config=None):
        """
        Initialize the service.

        Args:
            config: An instance of configfile (or None to instantiate a new one/use global context).
        """
        from connpy import configfile
        self.config = config or configfile()
        self.hooks = MethodHook
        self.reserved_names = []

    def set_reserved_names(self, names):
        """Inject a list of reserved names (e.g. from the CLI)."""
        self.reserved_names = names

    def _validate_node_name(self, unique_id):
        """Check if the node name in unique_id is reserved."""
        from .exceptions import ReservedNameError
        if not self.reserved_names:
            return

        uniques = self.config._explode_unique(unique_id)
        if uniques and "id" in uniques:
            # We only validate the 'id' (the actual node name), folders are prefixed with @
            node_name = uniques["id"]
            if node_name in self.reserved_names:
                raise ReservedNameError(f"Node name '{node_name}' is a reserved command.")
@@ -1,82 +0,0 @@
import os
import shutil
import base64
from typing import Any, Dict
from Crypto.PublicKey import RSA
from Crypto.Cipher import PKCS1_OAEP
from .base import BaseService
from .exceptions import ConnpyError, InvalidConfigurationError, NodeNotFoundError


class ConfigService(BaseService):
    """Business logic for general application settings and state configuration."""

    def get_settings(self) -> Dict[str, Any]:
        """Get the global configuration settings block."""
        settings = self.config.config.copy()
        settings["configfolder"] = self.config.defaultdir
        return settings

    def get_default_dir(self) -> str:
        """Get the default configuration directory."""
        return self.config.defaultdir

    def set_config_folder(self, folder_path: str):
        """Set the default location for config file by writing to ~/.config/conn/.folder"""
        if not os.path.isdir(folder_path):
            raise ConnpyError(f"readable_dir:{folder_path} is not a valid path")

        pathfile = os.path.join(self.config.anchor_path, ".folder")
        folder = os.path.abspath(folder_path).rstrip('/')

        try:
            with open(pathfile, "w") as f:
                f.write(str(folder))
        except Exception as e:
            raise ConnpyError(f"Failed to save config folder: {e}")

    def update_setting(self, key, value):
        """Update a setting in the configuration file."""
        self.config.config[key] = value
        self.config._saveconfig(self.config.file)

    def encrypt_password(self, password):
        """Encrypt a password using the application's configuration encryption key."""
        return self.config.encrypt(password)

    def apply_theme_from_file(self, theme_input):
        """Apply 'dark', 'light' theme or load a YAML theme file and save it to the configuration."""
        import yaml
        from ..printer import STYLES, LIGHT_THEME

        if theme_input == "dark":
            valid_styles = {}
            self.update_setting("theme", valid_styles)
            return valid_styles
        elif theme_input == "light":
            valid_styles = LIGHT_THEME.copy()
            self.update_setting("theme", valid_styles)
            return valid_styles

        if not os.path.exists(theme_input):
            raise InvalidConfigurationError(f"Theme file '{theme_input}' not found.")

        try:
            with open(theme_input, 'r') as f:
                user_styles = yaml.safe_load(f)
        except Exception as e:
            raise InvalidConfigurationError(f"Failed to parse theme file: {e}")

        if not isinstance(user_styles, dict):
            raise InvalidConfigurationError("Theme file must be a YAML dictionary.")

        # Filter for valid styles only (prevent junk in config)
        valid_styles = {k: v for k, v in user_styles.items() if k in STYLES}

        if not valid_styles:
            raise InvalidConfigurationError("No valid style keys found in theme file.")

        # Persist and return merged styles
        self.update_setting("theme", valid_styles)
        return valid_styles

@@ -1,87 +0,0 @@
import re
from typing import List, Dict, Any

from .base import BaseService
from ..hooks import MethodHook
from .. import printer


class ContextService(BaseService):
    """Business logic for managing and applying regex-based contexts locally."""

    @property
    def contexts(self) -> Dict[str, List[str]]:
        return self.config.config.get("contexts", {"all": [".*"]})

    @property
    def current_context(self) -> str:
        return self.config.config.get("current_context", "all")

    def list_contexts(self) -> List[Dict[str, Any]]:
        result = []
        for name in self.contexts.keys():
            result.append({
                "name": name,
                "active": (name == self.current_context),
                "regexes": self.contexts[name]
            })
        return result

    def add_context(self, name: str, regexes: List[str]):
        if not name.isalnum():
            raise ValueError("Context name must be alphanumeric")

        ctxs = self.contexts
        if name in ctxs:
            raise ValueError(f"Context '{name}' already exists")

        ctxs[name] = regexes
        self.config.config["contexts"] = ctxs
        self.config._saveconfig(self.config.file)

    def update_context(self, name: str, regexes: List[str]):
        if name == "all":
            raise ValueError("Cannot modify default context 'all'")

        ctxs = self.contexts
        if name not in ctxs:
            raise ValueError(f"Context '{name}' does not exist")

        ctxs[name] = regexes
        self.config.config["contexts"] = ctxs
        self.config._saveconfig(self.config.file)

    def delete_context(self, name: str):
        if name == "all":
            raise ValueError("Cannot delete default context 'all'")
        if name == self.current_context:
            raise ValueError(f"Cannot delete active context '{name}'")

        ctxs = self.contexts
        if name not in ctxs:
            raise ValueError(f"Context '{name}' does not exist")

        del ctxs[name]
        self.config.config["contexts"] = ctxs
        self.config._saveconfig(self.config.file)

    def set_active_context(self, name: str):
        if name not in self.contexts:
            raise ValueError(f"Context '{name}' does not exist")

        self.config.config["current_context"] = name
        self.config._saveconfig(self.config.file)

    def get_active_regexes(self) -> List[re.Pattern]:
        patterns = self.contexts.get(self.current_context, [".*"])
        return [re.compile(p) for p in patterns]

    def _match_any(self, node_name: str, patterns: List[re.Pattern]) -> bool:
        return any(p.match(node_name) for p in patterns)

    # Hook handlers for filtering
    def filter_node_list(self, *args, **kwargs):
        patterns = self.get_active_regexes()
        return [node for node in kwargs["result"] if self._match_any(node, patterns)]

    def filter_node_dict(self, *args, **kwargs):
        patterns = self.get_active_regexes()
        return {k: v for k, v in kwargs["result"].items() if self._match_any(k, patterns)}
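The hook handlers in `ContextService` reduce to one idea: a context is a list of regexes, and a node passes if any pattern matches its name from the start (`re.match`). A standalone sketch of that matching logic, with made-up context and node names:

```python
import re

# Hypothetical contexts mirroring the {"all": [".*"]} default above.
contexts = {"all": [".*"], "lab": [r"^lab-", r"^test-"]}

def filter_nodes(node_names, context):
    """Keep only node names matched by at least one regex of the context."""
    patterns = [re.compile(p) for p in contexts.get(context, [".*"])]
    return [n for n in node_names if any(p.match(n) for p in patterns)]

nodes = ["lab-sw1", "prod-rtr1", "test-fw2"]
print(filter_nodes(nodes, "lab"))   # only lab-/test- prefixed names survive
print(filter_nodes(nodes, "all"))   # ".*" matches everything
```

Note that `re.match` anchors at the beginning of the string only; a pattern like `sw1` would match `sw1-core` but not `core-sw1`, which is why the service's patterns tend to carry explicit `^` anchors or `.*` wildcards.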
@@ -1,31 +0,0 @@
class ConnpyError(Exception):
    """Base exception for all connpy services."""
    pass


class NodeNotFoundError(ConnpyError):
    """Raised when a connection or folder is not found."""
    pass


class NodeAlreadyExistsError(ConnpyError):
    """Raised when a node or folder already exists."""
    pass


class ProfileNotFoundError(ConnpyError):
    """Raised when a profile is not found."""
    pass


class ProfileAlreadyExistsError(ConnpyError):
    """Raised when a profile with the same name already exists."""
    pass


class ExecutionError(ConnpyError):
    """Raised when an execution fails or returns error."""
    pass


class InvalidConfigurationError(ConnpyError):
    """Raised when data or configuration input is invalid."""
    pass


class ReservedNameError(ConnpyError):
    """Raised when a node name conflicts with a reserved command."""
    pass
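The point of rooting every service exception in `ConnpyError` is that callers can catch one type at an API boundary and still distinguish subclasses when they need to. A minimal sketch of that pattern (the `lookup` helper is hypothetical, just to trigger the exception):

```python
# Minimal two-level hierarchy mirroring the exceptions module above.
class ConnpyError(Exception):
    """Base exception for all services."""

class NodeNotFoundError(ConnpyError):
    """Raised when a node is missing."""

def lookup(name, inventory):
    # Hypothetical helper: raise the specific subclass on a miss.
    if name not in inventory:
        raise NodeNotFoundError(f"Node '{name}' not found.")
    return inventory[name]

try:
    lookup("router1", {})
except ConnpyError as e:
    # The base class catches the subclass; type(e) keeps the detail.
    print(type(e).__name__, e)
```

CLI handlers can then wrap an entire command in a single `except ConnpyError` while gRPC or API layers map each subclass to a distinct status code.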
@@ -1,159 +0,0 @@
from typing import List, Dict, Any, Callable, Optional
import os
import yaml

from .base import BaseService
from connpy.core import nodes as Nodes
from .exceptions import ConnpyError


class ExecutionService(BaseService):
    """Business logic for executing commands on nodes and running automation scripts."""

    def run_commands(
        self,
        nodes_filter: str,
        commands: List[str],
        variables: Optional[Dict[str, Any]] = None,
        parallel: int = 10,
        timeout: int = 10,
        folder: Optional[str] = None,
        prompt: Optional[str] = None,
        on_node_complete: Optional[Callable] = None,
        logger: Optional[Callable] = None,
        name: Optional[str] = None
    ) -> Dict[str, str]:
        """Execute commands on a set of nodes."""
        try:
            matched_names = self.config._getallnodes(nodes_filter)
            if not matched_names:
                raise ConnpyError(f"No nodes found matching filter: {nodes_filter}")

            node_data = self.config.getitems(matched_names, extract=True)
            executor = Nodes(node_data, config=self.config)
            self.last_executor = executor

            results = executor.run(
                commands=commands,
                vars=variables,
                parallel=parallel,
                timeout=timeout,
                folder=folder,
                prompt=prompt,
                on_complete=on_node_complete,
                logger=logger
            )

            # Combine output and status for the caller
            full_results = {}
            for unique in results:
                full_results[unique] = {
                    "output": results[unique],
                    "status": executor.status.get(unique, 1)
                }

            return full_results
        except Exception as e:
            raise ConnpyError(f"Execution failed: {e}")

    def test_commands(
        self,
        nodes_filter: str,
        commands: List[str],
        expected: List[str],
        variables: Optional[Dict[str, Any]] = None,
        parallel: int = 10,
        timeout: int = 10,
        folder: Optional[str] = None,
        prompt: Optional[str] = None,
        on_node_complete: Optional[Callable] = None,
        logger: Optional[Callable] = None,
        name: Optional[str] = None
    ) -> Dict[str, Dict[str, bool]]:
        """Run commands and verify expected output on a set of nodes."""
        try:
            matched_names = self.config._getallnodes(nodes_filter)
            if not matched_names:
                raise ConnpyError(f"No nodes found matching filter: {nodes_filter}")

            node_data = self.config.getitems(matched_names, extract=True)
            executor = Nodes(node_data, config=self.config)
            self.last_executor = executor

            results = executor.test(
                commands=commands,
                expected=expected,
                vars=variables,
                parallel=parallel,
                timeout=timeout,
                folder=folder,
                prompt=prompt,
                on_complete=on_node_complete,
                logger=logger
            )
            return results
        except Exception as e:
            raise ConnpyError(f"Testing failed: {e}")

    def run_cli_script(self, nodes_filter: str, script_path: str, parallel: int = 10) -> Dict[str, str]:
        """Run a plain-text script containing one command per line."""
        if not os.path.exists(script_path):
            raise ConnpyError(f"Script file not found: {script_path}")

        try:
            with open(script_path, "r") as f:
                commands = [line.strip() for line in f if line.strip()]
        except Exception as e:
            raise ConnpyError(f"Failed to read script {script_path}: {e}")

        return self.run_commands(nodes_filter, commands, parallel=parallel)

    def run_yaml_playbook(self, playbook_data: str, parallel: int = 10) -> Dict[str, Any]:
        """Run a structured Connpy YAML automation playbook (from path or content)."""
        playbook = None
        if playbook_data.startswith("---YAML---\n"):
            try:
                content = playbook_data[len("---YAML---\n"):]
                playbook = yaml.load(content, Loader=yaml.FullLoader)
            except Exception as e:
                raise ConnpyError(f"Failed to parse YAML content: {e}")
        else:
            if not os.path.exists(playbook_data):
                raise ConnpyError(f"Playbook file not found: {playbook_data}")
            try:
                with open(playbook_data, "r") as f:
                    playbook = yaml.load(f, Loader=yaml.FullLoader)
            except Exception as e:
                raise ConnpyError(f"Failed to load playbook {playbook_data}: {e}")

        # Basic validation
        if not isinstance(playbook, dict) or "nodes" not in playbook or "commands" not in playbook:
            raise ConnpyError("Invalid playbook format: missing 'nodes' or 'commands' keys.")

        action = playbook.get("action", "run")
        options = playbook.get("options", {})

        # Extract all fields similar to RunHandler.cli_run
        exec_args = {
            "nodes_filter": playbook["nodes"],
            "commands": playbook["commands"],
            "variables": playbook.get("variables"),
            "parallel": options.get("parallel", parallel),
            "timeout": playbook.get("timeout", options.get("timeout", 10)),
            "prompt": options.get("prompt"),
            "name": playbook.get("name", "Task")
        }

        # Map 'output' field to folder path if it's not stdout/null
        output_cfg = playbook.get("output")
        if output_cfg not in [None, "stdout"]:
            exec_args["folder"] = output_cfg

        if action == "run":
            return self.run_commands(**exec_args)
        elif action == "test":
            exec_args["expected"] = playbook.get("expected", [])
            return self.test_commands(**exec_args)
        else:
            raise ConnpyError(f"Unsupported playbook action: {action}")
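Stripped of the YAML loading, `run_yaml_playbook` is a validate-then-dispatch routine: check the required keys, build a keyword-argument dict, and route on `action`. A standalone sketch of that shape, with the two executors stubbed out as lambdas (the real service calls `run_commands`/`test_commands`):

```python
def dispatch_playbook(playbook, run, test):
    """Validate a playbook dict and dispatch to a run or test callable."""
    if not isinstance(playbook, dict) or "nodes" not in playbook or "commands" not in playbook:
        raise ValueError("Invalid playbook format: missing 'nodes' or 'commands' keys.")
    action = playbook.get("action", "run")
    options = playbook.get("options", {})
    args = {
        "nodes_filter": playbook["nodes"],
        "commands": playbook["commands"],
        "parallel": options.get("parallel", 10),
    }
    if action == "run":
        return run(**args)
    elif action == "test":
        args["expected"] = playbook.get("expected", [])
        return test(**args)
    raise ValueError(f"Unsupported playbook action: {action}")

# Stub executors so the dispatch shape can be exercised without connpy.
result = dispatch_playbook(
    {"nodes": "lab*", "commands": ["show version"]},
    run=lambda **kw: ("ran", kw["nodes_filter"]),
    test=lambda **kw: ("tested", kw["expected"]),
)
print(result)
```

Because `action` defaults to `"run"` and `expected` is only added for `"test"`, a run-style playbook never needs an `expected` key, matching the service above.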
@@ -1,115 +0,0 @@
from .base import BaseService
import yaml
import os
from copy import deepcopy
from .exceptions import InvalidConfigurationError, NodeNotFoundError, ReservedNameError
from ..configfile import NoAliasDumper


class ImportExportService(BaseService):
    """Business logic for YAML/JSON inventory import and export."""

    def export_to_file(self, file_path, folders=None):
        """Export nodes/folders to a YAML file."""
        if os.path.exists(file_path):
            raise InvalidConfigurationError(f"File '{file_path}' already exists.")

        data = self.export_to_dict(folders)
        try:
            with open(file_path, "w") as f:
                yaml.dump(data, f, Dumper=NoAliasDumper, default_flow_style=False)
        except OSError as e:
            raise InvalidConfigurationError(f"Failed to export to '{file_path}': {e}")

    def export_to_dict(self, folders=None):
        """Export nodes/folders to a dictionary."""
        if not folders:
            return deepcopy(self.config.connections)
        else:
            # Validate folders exist
            for f in folders:
                if f != "@" and f not in self.config._getallfolders():
                    raise NodeNotFoundError(f"Folder '{f}' not found.")

            flat = self.config._getallnodesfull(folders, extract=False)
            nested = {}
            for k, v in flat.items():
                uniques = self.config._explode_unique(k)
                if not uniques:
                    continue

                if "folder" in uniques and "subfolder" in uniques:
                    f_name = uniques["folder"]
                    s_name = uniques["subfolder"]
                    i_name = uniques["id"]

                    if f_name not in nested:
                        nested[f_name] = {"type": "folder"}
                    if s_name not in nested[f_name]:
                        nested[f_name][s_name] = {"type": "subfolder"}

                    nested[f_name][s_name][i_name] = v

                elif "folder" in uniques:
                    f_name = uniques["folder"]
                    i_name = uniques["id"]

                    if f_name not in nested:
                        nested[f_name] = {"type": "folder"}

                    nested[f_name][i_name] = v
                else:
                    i_name = uniques["id"]
                    nested[i_name] = v

            return nested

    def import_from_file(self, file_path):
        """Import nodes/folders from a YAML file."""
        if not os.path.exists(file_path):
            raise InvalidConfigurationError(f"File '{file_path}' does not exist.")

        try:
            with open(file_path, "r") as f:
                data = yaml.load(f, Loader=yaml.FullLoader)
            self.import_from_dict(data)
        except Exception as e:
            raise InvalidConfigurationError(f"Failed to read/parse import file: {e}")

    def import_from_dict(self, data):
        """Import nodes/folders from a dictionary."""
        if not isinstance(data, dict):
            raise InvalidConfigurationError("Invalid import data format: expected a dictionary of nodes.")

        def _traverse_import(node_data, current_folder='', current_subfolder=''):
            for k, v in node_data.items():
                if k == "type":
                    continue
                if isinstance(v, dict):
                    node_type = v.get("type", "connection")
                    if node_type == "folder":
                        self.config._folder_add(folder=k)
                        _traverse_import(v, current_folder=k, current_subfolder='')
                    elif node_type == "subfolder":
                        self.config._folder_add(folder=current_folder, subfolder=k)
                        _traverse_import(v, current_folder=current_folder, current_subfolder=k)
                    elif node_type == "connection":
                        unique_id = k
                        if current_subfolder:
                            unique_id = f"{k}@{current_subfolder}@{current_folder}"
                        elif current_folder:
                            unique_id = f"{k}@{current_folder}"
                        self._validate_node_name(unique_id)

                        kwargs = deepcopy(v)
                        kwargs['id'] = k
                        kwargs['folder'] = current_folder
                        kwargs['subfolder'] = current_subfolder

                        self.config._connections_add(**kwargs)
                else:
                    # Invalid entry format: skip it
                    pass

        _traverse_import(data)
        self.config._saveconfig(self.config.file)
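`export_to_dict` rebuilds a nested folder/subfolder tree from flat `id@subfolder@folder` keys. A standalone sketch of that nesting step, using a plain `split("@")` in place of the config's `_explode_unique` helper:

```python
def nest(flat):
    """Rebuild the folder/subfolder tree from flat unique-id keys."""
    nested = {}
    for key, value in flat.items():
        parts = key.split("@")
        if len(parts) == 3:                      # id@subfolder@folder
            i, s, f = parts
            nested.setdefault(f, {"type": "folder"})
            nested[f].setdefault(s, {"type": "subfolder"})
            nested[f][s][i] = value
        elif len(parts) == 2:                    # id@folder
            i, f = parts
            nested.setdefault(f, {"type": "folder"})
            nested[f][i] = value
        else:                                    # root-level node
            nested[key] = value
    return nested

flat = {"sw1@site1": {"host": "10.0.0.1"}, "fw1@dmz@site1": {"host": "10.0.0.2"}}
print(nest(flat))
```

The injected `"type"` markers are what lets `import_from_dict` later distinguish folders and subfolders from connection entries while walking the tree back down.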
@@ -1,273 +0,0 @@
import re
from .base import BaseService
from .exceptions import (
    NodeNotFoundError, NodeAlreadyExistsError,
    InvalidConfigurationError, ReservedNameError
)


class NodeService(BaseService):
    def __init__(self, config=None):
        super().__init__(config)

    def list_nodes(self, filter_str=None, format_str=None):
        """Return a list of nodes filtered by regex and formatted if requested."""
        nodes = self.config._getallnodes()
        case_sensitive = self.config.config.get("case", False)

        if filter_str:
            flags = re.IGNORECASE if not case_sensitive else 0
            nodes = [n for n in nodes if re.search(filter_str, n, flags)]

        if not format_str:
            return nodes

        from .profile_service import ProfileService
        profile_service = ProfileService(self.config)

        formatted_nodes = []
        for n_id in nodes:
            # Use ProfileService to resolve profiles for dynamic formatting
            details = self.config.getitem(n_id, extract=False)
            if details:
                details = profile_service.resolve_node_data(details)
            else:
                details = {}

            name = n_id.split("@")[0]
            location = n_id.partition("@")[2] or "root"

            # Prepare context for .format() with all details
            context = details.copy()
            context.update({
                "name": name,
                "NAME": name.upper(),
                "location": location,
                "LOCATION": location.upper(),
            })

            # Add exploded uniques (id, folder, subfolder)
            uniques = self.config._explode_unique(n_id)
            if uniques:
                context.update(uniques)

            # Add uppercase versions of all keys for convenience
            for k, v in list(context.items()):
                if isinstance(v, str):
                    context[k.upper()] = v.upper()

            try:
                formatted_nodes.append(format_str.format(**context))
            except (KeyError, IndexError, ValueError):
                # Fallback to original string if format fails
                formatted_nodes.append(n_id)
        return formatted_nodes

    def list_folders(self, filter_str=None):
        """Return all unique folders, optionally filtered by regex."""
        folders = self.config._getallfolders()
        case_sensitive = self.config.config.get("case", False)

        if filter_str:
            if filter_str.startswith("@"):
                if not case_sensitive:
                    folders = [f for f in folders if f.lower() == filter_str.lower()]
                else:
                    folders = [f for f in folders if f == filter_str]
            else:
                flags = re.IGNORECASE if not case_sensitive else 0
                folders = [f for f in folders if re.search(filter_str, f, flags)]
        return folders

    def get_node_details(self, unique_id):
        """Return full configuration dictionary for a specific node."""
        try:
            details = self.config.getitem(unique_id)
            if not details:
                raise NodeNotFoundError(f"Node '{unique_id}' not found.")
            return details
        except (KeyError, TypeError):
            raise NodeNotFoundError(f"Node '{unique_id}' not found.")

    def explode_unique(self, unique_id):
        """Explode a unique ID into a dictionary of its parts."""
        return self.config._explode_unique(unique_id)

    def generate_cache(self, nodes=None, folders=None, profiles=None):
        """Generate and update the internal nodes cache."""
        self.config._generate_nodes_cache(nodes=nodes, folders=folders, profiles=profiles)

    def validate_parent_folder(self, unique_id, is_folder=False):
        """Check if parent folder exists for a given node unique ID."""
        if is_folder:
            uniques = self.config._explode_unique(unique_id)
            if uniques and "subfolder" in uniques and "folder" in uniques:
                parent_folder = f"@{uniques['folder']}"
                if parent_folder not in self.config._getallfolders():
                    raise NodeNotFoundError(f"Folder '{parent_folder}' not found.")
        else:
            node_folder = unique_id.partition("@")[2]
            if node_folder:
                parent_folder = f"@{node_folder}"
                if parent_folder not in self.config._getallfolders():
                    raise NodeNotFoundError(f"Folder '{parent_folder}' not found.")

    def add_node(self, unique_id, data, is_folder=False):
        """Logic for adding a new node or folder to configuration."""
        if not is_folder:
            self._validate_node_name(unique_id)

        all_nodes = self.config._getallnodes()
        all_folders = self.config._getallfolders()

        if is_folder:
            if unique_id in all_folders:
                raise NodeAlreadyExistsError(f"Folder '{unique_id}' already exists.")
            uniques = self.config._explode_unique(unique_id)
            if not uniques:
                raise InvalidConfigurationError(f"Invalid folder name '{unique_id}'.")

            # Check if parent folder exists when creating a subfolder
            if "subfolder" in uniques:
                self.validate_parent_folder(unique_id, is_folder=True)

            self.config._folder_add(**uniques)
            self.config._saveconfig(self.config.file)
        else:
            if unique_id in all_nodes:
                raise NodeAlreadyExistsError(f"Node '{unique_id}' already exists.")

            # Check if parent folder exists when creating a node in a folder
            self.validate_parent_folder(unique_id)

            # Ensure 'id' is in data for config._connections_add
            if "id" not in data:
                uniques = self.config._explode_unique(unique_id)
                if uniques and "id" in uniques:
                    data["id"] = uniques["id"]

            self.config._connections_add(**data)
            self.config._saveconfig(self.config.file)

    def update_node(self, unique_id, data):
        """Explicitly update an existing node."""
        all_nodes = self.config._getallnodes()
        if unique_id not in all_nodes:
            raise NodeNotFoundError(f"Node '{unique_id}' not found.")

        # Ensure 'id' is in data for config._connections_add
        if "id" not in data:
            uniques = self.config._explode_unique(unique_id)
            if uniques:
                data["id"] = uniques["id"]

        # config._connections_add handles updates correctly when the ID exists
        self.config._connections_add(**data)
        self.config._saveconfig(self.config.file)

    def delete_node(self, unique_id, is_folder=False):
        """Logic for deleting a node or folder."""
        if is_folder:
            uniques = self.config._explode_unique(unique_id)
            if not uniques:
                raise NodeNotFoundError(f"Folder '{unique_id}' not found or invalid.")
            self.config._folder_del(**uniques)
        else:
            uniques = self.config._explode_unique(unique_id)
            if not uniques:
                raise NodeNotFoundError(f"Node '{unique_id}' not found or invalid.")
            self.config._connections_del(**uniques)

        self.config._saveconfig(self.config.file)

    def connect_node(self, unique_id, sftp=False, debug=False, logger=None):
        """Interact with a node directly."""
        from connpy.core import node
        from .profile_service import ProfileService

        node_data = self.config.getitem(unique_id, extract=False)
        if not node_data:
            raise NodeNotFoundError(f"Node '{unique_id}' not found.")

        # Resolve profiles
        profile_service = ProfileService(self.config)
        resolved_data = profile_service.resolve_node_data(node_data)

        n = node(unique_id, **resolved_data, config=self.config)
        if sftp:
            n.protocol = "sftp"

        n.interact(debug=debug, logger=logger)

    def move_node(self, src_id, dst_id, copy=False):
        """Move or copy a node."""
        self._validate_node_name(dst_id)

        node_data = self.config.getitem(src_id)
        if not node_data:
            raise NodeNotFoundError(f"Source node '{src_id}' not found.")

        if dst_id in self.config._getallnodes():
            raise NodeAlreadyExistsError(f"Destination node '{dst_id}' already exists.")

        new_uniques = self.config._explode_unique(dst_id)
        if not new_uniques:
            raise InvalidConfigurationError(f"Invalid destination format '{dst_id}'.")

        new_node_data = node_data.copy()
        new_node_data.update(new_uniques)

        self.config._connections_add(**new_node_data)

        if not copy:
            src_uniques = self.config._explode_unique(src_id)
            self.config._connections_del(**src_uniques)

        self.config._saveconfig(self.config.file)

    def bulk_add(self, ids, hosts, common_data):
        """Add multiple nodes with shared common configuration."""
        count = 0
        all_nodes = self.config._getallnodes()

        for i, uid in enumerate(ids):
            if uid in all_nodes:
                continue

            try:
                self._validate_node_name(uid)
            except ReservedNameError:
                # For bulk, we might want to just skip or log.
                # CLI caller will handle if it wants to be strict.
                continue

            host = hosts[i] if i < len(hosts) else hosts[0]
            uniques = self.config._explode_unique(uid)
            if not uniques:
                continue

            node_data = common_data.copy()
            node_data.pop("ids", None)
            node_data.pop("location", None)
            node_data.update(uniques)
            node_data["host"] = host
            node_data["type"] = "connection"

            self.config._connections_add(**node_data)
            count += 1

        if count > 0:
            self.config._saveconfig(self.config.file)
        return count

    def full_replace(self, connections, profiles):
        """Replace all connections and profiles with new data."""
        self.config.connections = connections
        self.config.profiles = profiles
        self.config._saveconfig(self.config.file)

    def get_inventory(self):
        """Return a full snapshot of connections and profiles."""
        return {
            "connections": self.config.connections,
            "profiles": self.config.profiles
        }
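The format-string path in `list_nodes` builds a context dict with lowercase and uppercase variants of every value, then falls back to the raw ID if the user's format string references a missing key. A standalone sketch of that rendering step (profile resolution and `_explode_unique` are omitted; `render` is a hypothetical helper):

```python
def render(node_id, fmt):
    """Render a node id through a user format string, falling back on errors."""
    name = node_id.split("@")[0]
    location = node_id.partition("@")[2] or "root"
    context = {"name": name, "location": location}
    # Mirror the service's convenience uppercasing of every string value.
    for k, v in list(context.items()):
        context[k.upper()] = v.upper()
    try:
        return fmt.format(**context)
    except (KeyError, IndexError, ValueError):
        # Bad format string: fall back to the original id.
        return node_id

print(render("sw1@site1", "{NAME} ({location})"))
print(render("sw1@site1", "{missing}"))
```

Catching `KeyError`/`IndexError`/`ValueError` around `str.format` is what lets arbitrary user-supplied format strings degrade gracefully instead of aborting the whole listing.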
@@ -1,276 +0,0 @@
|
||||
from .base import BaseService
|
||||
import yaml
|
||||
import os
|
||||
from .exceptions import InvalidConfigurationError, NodeNotFoundError
|
||||
|
||||
|
||||
class PluginService(BaseService):
|
||||
"""Business logic for enabling, disabling, and listing plugins."""
|
||||
|
||||
def list_plugins(self):
|
||||
"""List all core and user-defined plugins with their status and hash."""
|
||||
import os
|
||||
import hashlib
|
||||
|
||||
# Check for user plugins directory
|
||||
plugin_dir = os.path.join(self.config.defaultdir, "plugins")
|
||||
# Check for core plugins directory
|
||||
core_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "..", "core_plugins")
|
||||
|
||||
all_plugin_info = {}
|
||||
|
||||
def get_hash(path):
|
||||
try:
|
||||
with open(path, "rb") as f:
|
||||
return hashlib.md5(f.read()).hexdigest()
|
||||
except Exception:
|
||||
return ""
|
||||
|
||||
# User plugins
|
||||
if os.path.exists(plugin_dir):
|
||||
for f in os.listdir(plugin_dir):
|
||||
if f.endswith(".py"):
|
||||
name = f[:-3]
|
||||
path = os.path.join(plugin_dir, f)
|
||||
all_plugin_info[name] = {"enabled": True, "hash": get_hash(path)}
|
||||
elif f.endswith(".py.bkp"):
|
||||
name = f[:-7]
|
||||
all_plugin_info[name] = {"enabled": False}
|
||||
|
||||
return all_plugin_info
|
||||
|
||||
def add_plugin(self, name, source_file, update=False):
|
||||
"""Add or update a plugin from a local file."""
|
||||
import os
|
||||
import shutil
|
||||
from connpy.plugins import Plugins
|
||||
|
||||
if not name.isalpha() or not name.islower() or len(name) > 15:
|
||||
raise InvalidConfigurationError("Plugin name should be lowercase letters up to 15 characters.")
|
||||
|
||||
p_manager = Plugins()
|
||||
# Check for bad script
|
||||
error = p_manager.verify_script(source_file)
|
||||
if error:
|
||||
raise InvalidConfigurationError(f"Invalid plugin script: {error}")
|
||||
|
||||
self._save_plugin_file(name, source_file, update, is_path=True)
|
||||
|
||||
def add_plugin_from_bytes(self, name, content, update=False):
|
||||
"""Add or update a plugin from bytes (gRPC)."""
|
||||
import tempfile
|
||||
import os
|
||||
|
||||
if not name.isalpha() or not name.islower() or len(name) > 15:
|
||||
raise InvalidConfigurationError("Plugin name should be lowercase letters up to 15 characters.")
|
||||
|
||||
# Write to temp file to verify script
|
||||
with tempfile.NamedTemporaryFile(suffix=".py", delete=False) as tmp:
|
||||
tmp.write(content)
|
||||
tmp_path = tmp.name
|
||||
|
||||
try:
|
||||
from connpy.plugins import Plugins
|
||||
p_manager = Plugins()
|
||||
error = p_manager.verify_script(tmp_path)
|
||||
if error:
|
||||
raise InvalidConfigurationError(f"Invalid plugin script: {error}")
|
||||
|
||||
self._save_plugin_file(name, tmp_path, update, is_path=True)
|
||||
finally:
|
||||
if os.path.exists(tmp_path):
|
||||
os.remove(tmp_path)
|
||||
|
||||
def _save_plugin_file(self, name, source, update=False, is_path=True):
|
||||
import os
|
||||
import shutil
|
||||
|
||||
plugin_dir = os.path.join(self.config.defaultdir, "plugins")
|
||||
os.makedirs(plugin_dir, exist_ok=True)
|
||||
|
||||
target_file = os.path.join(plugin_dir, f"{name}.py")
|
||||
backup_file = f"{target_file}.bkp"
|
||||
|
||||
if not update and (os.path.exists(target_file) or os.path.exists(backup_file)):
|
||||
raise InvalidConfigurationError(f"Plugin '{name}' already exists.")
|
||||
|
||||
try:
|
||||
if is_path:
|
||||
shutil.copy2(source, target_file)
|
||||
else:
|
||||
with open(target_file, "wb") as f:
|
||||
f.write(source)
|
||||
except OSError as e:
|
||||
raise InvalidConfigurationError(f"Failed to save plugin file: {e}")
|
||||
|
||||
def delete_plugin(self, name):
|
||||
"""Remove a plugin file permanently."""
|
||||
import os
|
||||
plugin_file = os.path.join(self.config.defaultdir, "plugins", f"{name}.py")
|
||||
disabled_file = f"{plugin_file}.bkp"
|
||||
|
||||
deleted = False
|
||||
for f in [plugin_file, disabled_file]:
|
||||
if os.path.exists(f):
|
||||
try:
|
||||
os.remove(f)
|
||||
deleted = True
|
||||
except OSError as e:
|
||||
raise InvalidConfigurationError(f"Failed to delete plugin file '{f}': {e}")
|
||||
|
||||
if not deleted:
|
||||
raise InvalidConfigurationError(f"Plugin '{name}' not found.")
|
||||
|
||||
def enable_plugin(self, name):
|
||||
"""Activate a plugin by renaming its backup file."""
|
||||
import os
|
||||
plugin_file = os.path.join(self.config.defaultdir, "plugins", f"{name}.py")
|
||||
disabled_file = f"{plugin_file}.bkp"
|
||||
|
||||
if os.path.exists(plugin_file):
|
||||
return False # Already enabled
|
||||
|
||||
if not os.path.exists(disabled_file):
|
||||
raise InvalidConfigurationError(f"Plugin '{name}' not found.")
|
||||
|
||||
try:
|
||||
os.rename(disabled_file, plugin_file)
|
||||
return True
|
||||
except OSError as e:
|
||||
raise InvalidConfigurationError(f"Failed to enable plugin '{name}': {e}")
|
||||
|
||||
def disable_plugin(self, name):
|
||||
"""Deactivate a plugin by renaming it to a backup file."""
|
||||
import os
|
||||
plugin_file = os.path.join(self.config.defaultdir, "plugins", f"{name}.py")
|
||||
disabled_file = f"{plugin_file}.bkp"
|
||||
|
||||
if os.path.exists(disabled_file):
|
||||
return False # Already disabled
|
||||
|
||||
if not os.path.exists(plugin_file):
|
||||
raise InvalidConfigurationError(f"Plugin '{name}' not found or is a core plugin.")
|
||||
|
||||
try:
|
||||
os.rename(plugin_file, disabled_file)
|
||||
return True
|
||||
except OSError as e:
|
||||
raise InvalidConfigurationError(f"Failed to disable plugin '{name}': {e}")
|
||||
|
||||
def get_plugin_source(self, name):
|
||||
import os
|
||||
from ..services.exceptions import InvalidConfigurationError
|
||||
|
||||
plugin_file = os.path.join(self.config.defaultdir, "plugins", f"{name}.py")
|
||||
core_path = os.path.dirname(os.path.realpath(__file__)) + f"/../core_plugins/{name}.py"
|
||||
|
||||
if os.path.exists(plugin_file):
|
||||
target = plugin_file
|
||||
elif os.path.exists(core_path):
|
||||
target = core_path
|
||||
else:
|
||||
raise InvalidConfigurationError(f"Plugin '{name}' not found")
|
||||
|
||||
with open(target, "r") as f:
|
||||
return f.read()
|
||||
|
||||
def invoke_plugin(self, name, args_dict):
|
||||
import sys, io
|
||||
from argparse import Namespace
|
||||
from ..services.exceptions import InvalidConfigurationError
|
||||
from connpy.plugins import Plugins
|
||||
class MockApp:
|
||||
is_mock = True
|
||||
def __init__(self, config):
|
||||
from ..core import node, nodes
|
||||
from ..ai import ai
|
||||
from ..services.provider import ServiceProvider
|
||||
|
||||
self.config = config
|
||||
self.node = node
|
||||
self.nodes = nodes
|
||||
self.ai = ai
|
||||
|
||||
self.services = ServiceProvider(config, mode="local")
|
||||
|
||||
# Get settings for CLI behavior
settings = self.services.config_svc.get_settings()
self.case = settings.get("case", False)
self.fzf = settings.get("fzf", False)

try:
    self.nodes_list = self.services.nodes.list_nodes()
    self.folders = self.services.nodes.list_folders()
    self.profiles = self.services.profiles.list_profiles()
except Exception:
    self.nodes_list = []
    self.folders = []
    self.profiles = []

args = Namespace(**args_dict)

p_manager = Plugins()
import os
plugin_file = os.path.join(self.config.defaultdir, "plugins", f"{name}.py")
core_path = os.path.dirname(os.path.realpath(__file__)) + f"/../core_plugins/{name}.py"

if os.path.exists(plugin_file):
    target = plugin_file
elif os.path.exists(core_path):
    target = core_path
else:
    raise InvalidConfigurationError(f"Plugin '{name}' not found")

module = p_manager._import_from_path(target)
parser = module.Parser().parser if hasattr(module, "Parser") else None

if "__func_name__" in args_dict and hasattr(module, args_dict["__func_name__"]):
    args.func = getattr(module, args_dict["__func_name__"])

app = MockApp(self.config)

from .. import printer
from rich.console import Console
import io
import queue
import threading

q = queue.Queue()

class QueueIO(io.StringIO):
    def write(self, s):
        q.put(s)
        return len(s)

    def flush(self):
        pass

buf = QueueIO()
old_console = printer._get_console()
old_err_console = printer._get_err_console()

def run_plugin():
    printer.set_thread_console(Console(file=buf, theme=printer.connpy_theme, force_terminal=True))
    printer.set_thread_err_console(Console(file=buf, theme=printer.connpy_theme, force_terminal=True))
    printer.set_thread_stream(buf)
    try:
        if hasattr(module, "Entrypoint"):
            module.Entrypoint(args, parser, app)
    except BaseException as e:
        if not isinstance(e, SystemExit):
            import traceback
            printer.err_console.print(traceback.format_exc())
    finally:
        printer.set_thread_console(old_console)
        printer.set_thread_err_console(old_err_console)
        printer.set_thread_stream(None)
        q.put(None)

t = threading.Thread(target=run_plugin, daemon=True)
t.start()

while True:
    item = q.get()
    if item is None:
        break
    yield item
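The queue-and-sentinel wiring above (`QueueIO`, `run_plugin`, and the consuming `while` loop) is a general pattern for streaming a worker thread's output back to a generator incrementally. A minimal standalone sketch of the same idea, with hypothetical names and none of the connpy `printer` machinery:

```python
import io
import queue
import threading

class QueueIO(io.StringIO):
    """File-like object that forwards every write to a queue."""
    def __init__(self, q):
        super().__init__()
        self.q = q

    def write(self, s):
        self.q.put(s)
        return len(s)

def stream_output(worker):
    """Run worker(buf) in a background thread and yield its output as it arrives."""
    q = queue.Queue()
    buf = QueueIO(q)

    def run():
        try:
            worker(buf)
        finally:
            q.put(None)  # sentinel: no more output, even if worker raised

    threading.Thread(target=run, daemon=True).start()
    while True:
        item = q.get()
        if item is None:
            break
        yield item

chunks = list(stream_output(lambda buf: buf.write("hello")))
```

Putting the sentinel in a `finally` block is what guarantees the consumer loop terminates even when the worker raises, mirroring the `q.put(None)` in `run_plugin` above.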
@@ -1,134 +0,0 @@
from .base import BaseService
from .exceptions import ProfileNotFoundError, ProfileAlreadyExistsError, InvalidConfigurationError


class ProfileService(BaseService):
    """Business logic for node profiles management."""

    def list_profiles(self, filter_str=None):
        """List all profile names, optionally filtered."""
        profiles = list(self.config.profiles.keys())
        case_sensitive = self.config.config.get("case", False)

        if filter_str:
            if not case_sensitive:
                f_str = filter_str.lower()
                return [p for p in profiles if f_str in p.lower()]
            else:
                return [p for p in profiles if filter_str in p]
        return profiles

    def get_profile(self, name, resolve=True):
        """Get the profile dictionary, optionally resolved."""
        profile = self.config.profiles.get(name)
        if not profile:
            raise ProfileNotFoundError(f"Profile '{name}' not found.")

        if resolve:
            return self.resolve_node_data(profile)
        return profile

    def add_profile(self, name, data):
        """Add a new profile."""
        if name in self.config.profiles:
            raise ProfileAlreadyExistsError(f"Profile '{name}' already exists.")

        # Filter data to match _profiles_add signature and ensure id is passed
        allowed_keys = {"host", "options", "logs", "password", "port", "protocol", "user", "tags", "jumphost"}
        filtered_data = {k: v for k, v in data.items() if k in allowed_keys}

        self.config._profiles_add(id=name, **filtered_data)
        self.config._saveconfig(self.config.file)

    def resolve_node_data(self, node_data):
        """Resolve profile references (@profile) in node data and handle inheritance."""
        resolved = node_data.copy()

        # 1. Identify all referenced profiles to support inheritance
        referenced_profiles = []
        for value in resolved.values():
            if isinstance(value, str) and value.startswith("@"):
                referenced_profiles.append(value[1:])
            elif isinstance(value, list):
                for item in value:
                    if isinstance(item, str) and item.startswith("@"):
                        referenced_profiles.append(item[1:])

        # 2. Resolve explicit references
        for key, value in resolved.items():
            if isinstance(value, str) and value.startswith("@"):
                profile_name = value[1:]
                try:
                    profile = self.get_profile(profile_name, resolve=True)
                    resolved[key] = profile.get(key, "")
                except ProfileNotFoundError:
                    resolved[key] = ""
            elif isinstance(value, list):
                resolved_list = []
                for item in value:
                    if isinstance(item, str) and item.startswith("@"):
                        profile_name = item[1:]
                        try:
                            profile = self.get_profile(profile_name, resolve=True)
                            if "password" in profile:
                                resolved_list.append(profile["password"])
                        except ProfileNotFoundError:
                            pass
                    else:
                        resolved_list.append(item)
                resolved[key] = resolved_list

        # 3. Inheritance: Fill empty keys from the first referenced profile
        if referenced_profiles:
            base_profile_name = referenced_profiles[0]
            try:
                base_profile = self.get_profile(base_profile_name, resolve=True)
                for key, value in base_profile.items():
                    # Fill if key is missing or empty
                    if key not in resolved or resolved[key] == "" or resolved[key] == [] or resolved[key] is None:
                        resolved[key] = value
            except ProfileNotFoundError:
                pass

        # 4. Handle default protocol
        if resolved.get("protocol") == "" or resolved.get("protocol") is None:
            try:
                default_profile = self.get_profile("default", resolve=True)
                resolved["protocol"] = default_profile.get("protocol", "ssh")
            except ProfileNotFoundError:
                resolved["protocol"] = "ssh"

        return resolved

    def delete_profile(self, name):
        """Delete an existing profile, with safety checks."""
        if name not in self.config.profiles:
            raise ProfileNotFoundError(f"Profile '{name}' not found.")

        if name == "default":
            raise InvalidConfigurationError("Cannot delete the 'default' profile.")

        used_by = self.config._profileused(name)
        if used_by:
            # We return the list of nodes using it so the UI can inform the user
            raise InvalidConfigurationError(f"Profile '{name}' is used by nodes: {', '.join(used_by)}")

        self.config._profiles_del(id=name)
        self.config._saveconfig(self.config.file)

    def update_profile(self, name, data):
        """Update an existing profile."""
        if name not in self.config.profiles:
            raise ProfileNotFoundError(f"Profile '{name}' not found.")

        # Merge with existing data
        existing = self.get_profile(name, resolve=False)
        updated_data = existing.copy()
        updated_data.update(data)

        # Filter data to match _profiles_add signature
        allowed_keys = {"host", "options", "logs", "password", "port", "protocol", "user", "tags", "jumphost"}
        filtered_data = {k: v for k, v in updated_data.items() if k in allowed_keys}

        self.config._profiles_add(id=name, **filtered_data)
        self.config._saveconfig(self.config.file)
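The core of `resolve_node_data` is the `@profile` convention: a string value beginning with `@` is replaced by the same field taken from the named profile. A stripped-down sketch of just that resolution step, using hypothetical sample data rather than connpy's real config:

```python
# Hypothetical profile store; connpy keeps this in its config file.
profiles = {
    "office": {"user": "admin", "password": "s3cret"},
}

def resolve(node, profiles):
    """Replace "@name" string values with the same key from profile "name"."""
    resolved = dict(node)
    for key, value in node.items():
        if isinstance(value, str) and value.startswith("@"):
            profile = profiles.get(value[1:], {})
            # Missing profile or missing key falls back to "" (as above)
            resolved[key] = profile.get(key, "")
    return resolved

node = {"host": "10.0.0.1", "user": "@office", "password": "@office"}
result = resolve(node, profiles)
```

The full service above additionally resolves `@` references inside lists, inherits empty fields from the first referenced profile, and falls back to the `default` profile's protocol; this sketch covers only the scalar case.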
@@ -1,71 +0,0 @@
from .exceptions import InvalidConfigurationError


class RemoteStub:
    def __getattr__(self, name):
        raise NotImplementedError(
            "Remote mode (gRPC) is not yet available. "
            "Use local mode or wait for the gRPC implementation."
        )


class ServiceProvider:
    """Dynamic service backend. Transparently provides local or remote services."""

    def __init__(self, config, mode="local", remote_host=None):
        self.mode = mode
        self.config = config
        self.remote_host = remote_host

        if mode == "local":
            self._init_local()
        elif mode == "remote":
            self._init_remote()
        else:
            raise ValueError(f"Unknown service mode: {mode}")

    def _init_local(self):
        from .node_service import NodeService
        from .profile_service import ProfileService
        from .config_service import ConfigService
        from .plugin_service import PluginService
        from .ai_service import AIService
        from .system_service import SystemService
        from .execution_service import ExecutionService
        from .import_export_service import ImportExportService
        from .context_service import ContextService
        from .sync_service import SyncService

        self.nodes = NodeService(self.config)
        self.profiles = ProfileService(self.config)
        self.config_svc = ConfigService(self.config)
        self.plugins = PluginService(self.config)
        self.ai = AIService(self.config)
        self.system = SystemService(self.config)
        self.execution = ExecutionService(self.config)
        self.import_export = ImportExportService(self.config)
        self.context = ContextService(self.config)
        self.sync = SyncService(self.config)

    def _init_remote(self):
        # Allow ConfigService to work locally so the user can revert the mode
        from .config_service import ConfigService
        from .context_service import ContextService
        from .sync_service import SyncService
        self.config_svc = ConfigService(self.config)
        self.context = ContextService(self.config)
        self.sync = SyncService(self.config)

        if not self.remote_host:
            raise InvalidConfigurationError("Remote host must be specified in remote mode")

        import grpc
        from ..grpc_layer.stubs import NodeStub, ProfileStub, PluginStub, AIStub, ExecutionStub, ImportExportStub, SystemStub

        channel = grpc.insecure_channel(self.remote_host)

        self.nodes = NodeStub(channel, remote_host=self.remote_host, config=self.config)
        self.profiles = ProfileStub(channel, remote_host=self.remote_host, node_stub=self.nodes)
        self.plugins = PluginStub(channel, remote_host=self.remote_host)
        self.ai = AIStub(channel, remote_host=self.remote_host)
        self.system = SystemStub(channel, remote_host=self.remote_host)
        self.execution = ExecutionStub(channel, remote_host=self.remote_host)
        self.import_export = ImportExportStub(channel, remote_host=self.remote_host)
@@ -1,389 +0,0 @@
import os
import time
import zipfile
import tempfile
import io
import yaml
import threading
from datetime import datetime
from google.oauth2.credentials import Credentials
from google.auth.transport.requests import Request
from googleapiclient.discovery import build
from google.auth.exceptions import RefreshError
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.http import MediaFileUpload, MediaIoBaseDownload
from googleapiclient.errors import HttpError

from .base import BaseService
from .. import printer


class SyncService(BaseService):
    """Business logic for Google Drive synchronization."""

    def __init__(self, config):
        super().__init__(config)
        self.scopes = ['https://www.googleapis.com/auth/drive.appdata']
        self.token_file = os.path.join(self.config.defaultdir, "gtoken.json")

        # Embedded OAuth config
        self.client_config = {
            "installed": {
                "client_id": "559598250648-cr189kfrga2il1a6d6nkaspq0a9pn5vv." + "apps.googleusercontent.com",
                "project_id": "celtic-surface-420323",
                "auth_uri": "https://accounts.google.com/o/oauth2/auth",
                "token_uri": "https://oauth2.googleapis.com/token",
                "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
                "client_secret": "GOCSPX-" + "VVfOSrJLPU90Pl0g7aAXM9GK2xPE",
                "redirect_uris": ["http://localhost"]
            }
        }

        # Sync status from config
        self.sync_enabled = self.config.config.get("sync", False)
        self.sync_remote = self.config.config.get("sync_remote", False)

    def login(self):
        """Authenticate with Google Drive."""
        creds = None
        if os.path.exists(self.token_file):
            creds = Credentials.from_authorized_user_file(self.token_file, self.scopes)

        try:
            if not creds or not creds.valid:
                if creds and creds.expired and creds.refresh_token:
                    creds.refresh(Request())
                else:
                    flow = InstalledAppFlow.from_client_config(self.client_config, self.scopes)
                    creds = flow.run_local_server(port=0, access_type='offline')

                with open(self.token_file, 'w') as token:
                    token.write(creds.to_json())

            printer.success("Logged in successfully.")
            return True

        except RefreshError:
            if os.path.exists(self.token_file):
                os.remove(self.token_file)
            printer.warning("Existing token was invalid and has been removed. Please log in again.")
            return False
        except Exception as e:
            printer.error(f"Login failed: {e}")
            return False

    def logout(self):
        """Remove Google Drive credentials."""
        if os.path.exists(self.token_file):
            os.remove(self.token_file)
            printer.success("Logged out successfully.")
        else:
            printer.info("No credentials file found. Already logged out.")

    def get_credentials(self):
        """Get valid credentials, refreshing if necessary."""
        if os.path.exists(self.token_file):
            creds = Credentials.from_authorized_user_file(self.token_file, self.scopes)
        else:
            return None

        if not creds or not creds.valid:
            if creds and creds.expired and creds.refresh_token:
                try:
                    creds.refresh(Request())
                except RefreshError:
                    return None
            else:
                return None
        return creds

    def check_login_status(self):
        """Check if logged in to Google Drive."""
        if os.path.exists(self.token_file):
            creds = Credentials.from_authorized_user_file(self.token_file)
            if creds and creds.expired and creds.refresh_token:
                try:
                    creds.refresh(Request())
                except RefreshError:
                    pass
            return True if creds.valid else "Invalid"
        return False

    def list_backups(self):
        """List files in Google Drive appDataFolder."""
        creds = self.get_credentials()
        if not creds:
            printer.error("Not logged in to Google Drive.")
            return []

        try:
            service = build("drive", "v3", credentials=creds)
            response = service.files().list(
                spaces="appDataFolder",
                fields="files(id, name, appProperties)",
                pageSize=10,
            ).execute()

            files_info = []
            for file in response.get("files", []):
                files_info.append({
                    "name": file.get("name"),
                    "id": file.get("id"),
                    "date": file.get("appProperties", {}).get("date"),
                    "timestamp": file.get("appProperties", {}).get("timestamp")
                })
            return files_info
        except HttpError as error:
            printer.error(f"Google Drive API error: {error}")
            return []

    def compress_and_upload(self, remote_data=None):
        """Compress config and upload to Drive."""
        timestamp = int(time.time() * 1000)
        with tempfile.TemporaryDirectory() as tmp_dir:
            zip_path = os.path.join(tmp_dir, f"connpy-backup-{timestamp}.zip")

            with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
                # If we have remote data, we create a virtual config file
                if remote_data:
                    config_tmp = os.path.join(tmp_dir, "config.yaml")
                    with open(config_tmp, 'w') as f:
                        yaml.dump(remote_data, f, default_flow_style=False)
                    zipf.write(config_tmp, "config.yaml")
                else:
                    # Legacy behavior: use local file
                    zipf.write(self.config.file, os.path.basename(self.config.file))

                # Always include the key if it exists
                if os.path.exists(self.config.key):
                    zipf.write(self.config.key, ".osk")

            # Manage retention (max 100 backups)
            backups = self.list_backups()
            if len(backups) >= 100:
                oldest = min(backups, key=lambda x: x['timestamp'] or '0')
                self.delete_backup(oldest['id'])

            # Upload
            return self.upload_file(zip_path, timestamp)

    def upload_file(self, file_path, timestamp):
        """Internal method to upload to Drive."""
        creds = self.get_credentials()
        if not creds:
            return False

        service = build('drive', 'v3', credentials=creds)
        date_str = datetime.fromtimestamp(timestamp / 1000).strftime('%Y-%m-%d %H:%M:%S')

        file_metadata = {
            'name': os.path.basename(file_path),
            'parents': ["appDataFolder"],
            'appProperties': {
                'timestamp': str(timestamp),
                'date': date_str
            }
        }
        media = MediaFileUpload(file_path)
        try:
            service.files().create(body=file_metadata, media_body=media, fields='id').execute()
            printer.success("Backup uploaded to Google Drive.")
            return True
        except Exception as e:
            printer.error(f"Upload failed: {e}")
            return False

    def delete_backup(self, file_id):
        """Delete a backup from Drive."""
        creds = self.get_credentials()
        if not creds:
            return False
        try:
            service = build("drive", "v3", credentials=creds)
            service.files().delete(fileId=file_id).execute()
            return True
        except Exception as e:
            printer.error(f"Delete failed: {e}")
            return False

    def restore_backup(self, file_id=None, restore_config=True, restore_nodes=True, app_instance=None):
        """Download and analyze a backup for restoration."""
        backups = self.list_backups()
        if not backups:
            printer.error("No backups found.")
            return None

        if file_id:
            selected = next((f for f in backups if f['id'] == file_id), None)
            if not selected:
                printer.error(f"Backup {file_id} not found.")
                return None
        else:
            selected = max(backups, key=lambda x: x['timestamp'] or '0')

        with tempfile.TemporaryDirectory() as tmp_dir:
            zip_path = os.path.join(tmp_dir, 'restore.zip')
            if self.download_file(selected['id'], zip_path):
                return self.perform_restore(zip_path, restore_config, restore_nodes, app_instance)
            return False

    def download_file(self, file_id, dest):
        """Internal method to download from Drive."""
        creds = self.get_credentials()
        if not creds:
            return False
        try:
            service = build('drive', 'v3', credentials=creds)
            request = service.files().get_media(fileId=file_id)
            with io.FileIO(dest, mode='wb') as fh:
                downloader = MediaIoBaseDownload(fh, request)
                done = False
                while not done:
                    _, done = downloader.next_chunk()
            return True
        except Exception as e:
            printer.error(f"Download failed: {e}")
            return False

    def perform_restore(self, zip_path, restore_config=True, restore_nodes=True, app_instance=None):
        """Execute the actual restoration of files or remote nodes."""
        try:
            with zipfile.ZipFile(zip_path, 'r') as zipf:
                names = zipf.namelist()
                dest_dir = os.path.dirname(self.config.file)

                # We need to read the config content from zip to decide what to do
                backup_data = {}
                config_filename = "config.yaml" if "config.yaml" in names else ("config.json" if "config.json" in names else None)

                if config_filename:
                    with zipf.open(config_filename) as f:
                        backup_data = yaml.safe_load(f)

                # 1. Restore Key (.osk) - Part of config identity
                if restore_config and ".osk" in names:
                    zipf.extract(".osk", os.path.dirname(self.config.key))

                # 2. Restore Config (Local Settings)
                if restore_config and backup_data:
                    local_config = self.config.config.copy()

                    # Capture current connectivity settings to preserve them
                    current_mode = local_config.get("service_mode", "local")
                    current_remote = local_config.get("remote_host")

                    if "config" in backup_data:
                        local_config.update(backup_data["config"])

                    # Restore connectivity settings - we don't want a restore to
                    # accidentally switch us between local and remote and break connectivity
                    local_config["service_mode"] = current_mode
                    if current_remote:
                        local_config["remote_host"] = current_remote

                    self.config.config = local_config
                    self.config._saveconfig(self.config.file)

                # 3. Restore Nodes and Profiles
                if restore_nodes and backup_data:
                    connections = backup_data.get("connections", {})
                    profiles = backup_data.get("profiles", {})

                    if app_instance and app_instance.services.mode == "remote":
                        # Push to Remote via gRPC
                        app_instance.services.nodes.full_replace(connections, profiles)
                    else:
                        # Restore to Local config file
                        self.config.connections = connections
                        self.config.profiles = profiles
                        self.config._saveconfig(self.config.file)

            # Clear caches
            for f in [self.config.cachefile, self.config.fzf_cachefile]:
                if os.path.exists(f):
                    os.remove(f)

            return True
        except Exception as e:
            printer.error(f"Restoration failed: {e}")
            return False

    def analyze_backup_content(self, file_id=None):
        """Analyze a backup without restoring to provide info for confirmation."""
        backups = self.list_backups()
        if not backups:
            return None
        selected = next((f for f in backups if f['id'] == file_id), None) if file_id else max(backups, key=lambda x: x['timestamp'] or '0')

        with tempfile.TemporaryDirectory() as tmp_dir:
            zip_path = os.path.join(tmp_dir, 'analyze.zip')
            if self.download_file(selected['id'], zip_path):
                with zipfile.ZipFile(zip_path, 'r') as zipf:
                    names = zipf.namelist()
                    config_filename = "config.yaml" if "config.yaml" in names else ("config.json" if "config.json" in names else None)
                    if config_filename:
                        with zipf.open(config_filename) as f:
                            data = yaml.safe_load(f)
                        connections = data.get("connections", {})

                        # Accurate recursive count
                        nodes_count = 0
                        folders_count = 0

                        # Layer 1
                        for k, v in connections.items():
                            if isinstance(v, dict):
                                if v.get("type") == "connection":
                                    nodes_count += 1
                                elif v.get("type") == "folder":
                                    folders_count += 1
                                    # Layer 2
                                    for k2, v2 in v.items():
                                        if isinstance(v2, dict):
                                            if v2.get("type") == "connection":
                                                nodes_count += 1
                                            elif v2.get("type") == "subfolder":
                                                folders_count += 1
                                                # Layer 3
                                                for k3, v3 in v2.items():
                                                    if isinstance(v3, dict) and v3.get("type") == "connection":
                                                        nodes_count += 1

                        return {
                            "nodes": nodes_count,
                            "folders": folders_count,
                            "profiles": len(data.get("profiles", {})),
                            "has_config": "config" in data,
                            "has_key": ".osk" in names
                        }
        return None

    def perform_sync(self, app_instance):
        """Background sync logic."""
        # Always check current config state
        sync_enabled = self.config.config.get("sync", False)
        sync_remote = self.config.config.get("sync_remote", False)

        if not sync_enabled:
            return

        if self.check_login_status() != True:
            printer.warning("Auto-sync: Not logged in to Google Drive.")
            return

        remote_data = None
        if sync_remote and app_instance.services.mode == "remote":
            try:
                inventory = app_instance.services.nodes.get_inventory()
                # Merge with local settings
                local_settings = app_instance.services.config_svc.get_settings()
                local_settings.pop("configfolder", None)

                # Maintain proper config structure: {config: {}, connections: {}, profiles: {}}
                remote_data = {
                    "config": local_settings,
                    "connections": inventory.get("connections", {}),
                    "profiles": inventory.get("profiles", {})
                }
            except Exception as e:
                printer.warning(f"Could not fetch remote inventory for sync: {e}")

        # Run in thread to not block CLI
        threading.Thread(
            target=self.compress_and_upload,
            args=(remote_data,)
        ).start()
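`compress_and_upload` caps retention at 100 backups: before uploading a new archive it deletes the oldest entry, ordered by the `appProperties` timestamp. A small sketch of that pruning rule against a hypothetical in-memory list (the real service calls `delete_backup` on the Drive API instead):

```python
MAX_BACKUPS = 3  # the service above uses 100

def prune(backups, max_backups=MAX_BACKUPS):
    """Drop the oldest backups until there is room for one more upload."""
    # Timestamps are stored as strings of epoch milliseconds; compare numerically.
    backups = sorted(backups, key=lambda b: int(b["timestamp"]))
    while len(backups) >= max_backups:
        backups.pop(0)  # oldest first
    return backups

backups = [{"id": i, "timestamp": str(1000 + i)} for i in range(3)]
remaining = [b["id"] for b in prune(backups)]
```

Note the numeric comparison: the service compares timestamp strings directly, which only orders correctly while all values have the same digit count.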
@@ -1,87 +0,0 @@
from .base import BaseService
from .exceptions import ConnpyError


class SystemService(BaseService):
    """Business logic for application lifecycle (API, processes)."""

    def start_api(self, port=None):
        """Start the Connpy REST API."""
        from connpy.api import start_api
        try:
            start_api(port, config=self.config)
        except Exception as e:
            raise ConnpyError(f"Failed to start API: {e}")

    def debug_api(self, port=None):
        """Start the Connpy REST API in debug mode."""
        from connpy.api import debug_api
        try:
            debug_api(port, config=self.config)
        except Exception as e:
            raise ConnpyError(f"Failed to start API in debug mode: {e}")

    def stop_api(self):
        """Stop the Connpy REST API."""
        try:
            import os
            import signal

            pids = ["/run/connpy.pid", "/tmp/connpy.pid"]
            stopped = False
            for pid_file in pids:
                if os.path.exists(pid_file):
                    try:
                        with open(pid_file, "r") as f:
                            # Read only the first line (PID)
                            line = f.readline().strip()
                        if not line:
                            continue
                        pid = int(line)
                        os.kill(pid, signal.SIGTERM)
                        # Remove the PID file after successful kill
                        os.remove(pid_file)
                        stopped = True
                    except (ValueError, OSError, ProcessLookupError):
                        # If process is already dead, just remove the stale PID file
                        try:
                            os.remove(pid_file)
                        except OSError:
                            pass
                        continue
            return stopped
        except Exception as e:
            raise ConnpyError(f"Failed to stop API: {e}")

    def restart_api(self, port=None):
        """Restart the Connpy REST API, maintaining the current port if none provided."""
        if port is None:
            status = self.get_api_status()
            if status["running"] and status.get("port"):
                port = status["port"]

        self.stop_api()
        import time
        time.sleep(1)
        self.start_api(port)

    def get_api_status(self):
        """Check if the API is currently running."""
        import os
        pids = ["/run/connpy.pid", "/tmp/connpy.pid"]
        for pid_file in pids:
            if os.path.exists(pid_file):
                try:
                    with open(pid_file, "r") as f:
                        pid_line = f.readline().strip()
                        port_line = f.readline().strip()
                    if not pid_line:
                        continue
                    pid = int(pid_line)
                    port = int(port_line) if port_line else None
                    # Signal 0 checks for process existence without killing it
                    os.kill(pid, 0)
                    return {"running": True, "pid": pid, "port": port, "pid_file": pid_file}
                except (ValueError, OSError, ProcessLookupError):
                    continue
        return {"running": False}
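`get_api_status` relies on the classic `os.kill(pid, 0)` idiom: signal number 0 performs the permission and existence checks without actually delivering a signal, so a raised `OSError`/`ProcessLookupError` means the PID from the file is stale. A minimal sketch of that check (hypothetical helper name, not part of connpy):

```python
import os

def pid_alive(pid):
    """Return True if a process with this PID currently exists."""
    try:
        os.kill(pid, 0)  # signal 0: existence/permission check only, sends nothing
        return True
    except (OSError, ProcessLookupError):
        return False

alive = pid_alive(os.getpid())  # the current process is always alive
```

On POSIX systems a `PermissionError` (a subclass of `OSError`) can also mean the process exists but belongs to another user; the service above treats that the same as "not running", which is acceptable because it only manages PIDs it wrote itself.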
@@ -1 +0,0 @@
# Tests package
@@ -1,193 +0,0 @@
"""Shared fixtures for connpy tests.

All tests use tmp_path to create isolated config/keys.
No test touches ~/.config/conn/
"""
import pytest
import json
import yaml
import os
from unittest.mock import patch, MagicMock
from Crypto.PublicKey import RSA


# ---------------------------------------------------------------------------
# Minimal config data
# ---------------------------------------------------------------------------
DEFAULT_CONFIG = {
    "config": {"case": False, "idletime": 30, "fzf": False},
    "connections": {},
    "profiles": {
        "default": {
            "host": "", "protocol": "ssh", "port": "", "user": "",
            "password": "", "options": "", "logs": "", "tags": "", "jumphost": ""
        }
    }
}

SAMPLE_CONNECTIONS = {
    "router1": {
        "host": "10.0.0.1", "protocol": "ssh", "port": "22",
        "user": "admin", "password": "pass1", "options": "",
        "logs": "", "tags": "", "jumphost": "", "type": "connection"
    },
    "office": {
        "type": "folder",
        "server1": {
            "host": "10.0.1.1", "protocol": "ssh", "port": "",
            "user": "root", "password": "pass2", "options": "",
            "logs": "", "tags": "", "jumphost": "", "type": "connection"
        },
        "datacenter": {
            "type": "subfolder",
            "db1": {
                "host": "10.0.2.1", "protocol": "ssh", "port": "",
                "user": "dbadmin", "password": "pass3", "options": "",
                "logs": "", "tags": "", "jumphost": "", "type": "connection"
            }
        }
    }
}

SAMPLE_PROFILES = {
    "default": {
        "host": "", "protocol": "ssh", "port": "", "user": "",
        "password": "", "options": "", "logs": "", "tags": "", "jumphost": ""
    },
    "office-user": {
        "host": "", "protocol": "ssh", "port": "", "user": "officeadmin",
        "password": "officepass", "options": "", "logs": "", "tags": "", "jumphost": ""
    }
}


# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------

@pytest.fixture
def tmp_config_dir(tmp_path):
    """Create an isolated config directory with config.yaml and RSA key."""
    config_dir = tmp_path / ".config" / "conn"
    config_dir.mkdir(parents=True)
    plugins_dir = config_dir / "plugins"
    plugins_dir.mkdir()

    # Write config.yaml
    config_file = config_dir / "config.yaml"
    config_file.write_text(yaml.dump(DEFAULT_CONFIG, default_flow_style=False, sort_keys=False))
    os.chmod(str(config_file), 0o600)

    # Write .folder (points to itself)
    folder_file = config_dir / ".folder"
    folder_file.write_text(str(config_dir))

    # Generate RSA key
    key = RSA.generate(2048)
    key_file = config_dir / ".osk"
    key_file.write_bytes(key.export_key("PEM"))
    os.chmod(str(key_file), 0o600)

    return config_dir


@pytest.fixture
def config(tmp_config_dir):
    """Create a configfile instance pointing to tmp directory."""
    from connpy.configfile import configfile
    conf_path = str(tmp_config_dir / "config.yaml")
    key_path = str(tmp_config_dir / ".osk")
    return configfile(conf=conf_path, key=key_path)


@pytest.fixture
def populated_config(tmp_config_dir):
    """Create a configfile with sample nodes/profiles pre-loaded."""
    config_file = tmp_config_dir / "config.yaml"
    data = {
        "config": {"case": False, "idletime": 30, "fzf": False},
        "connections": SAMPLE_CONNECTIONS,
        "profiles": SAMPLE_PROFILES
    }
    config_file.write_text(yaml.dump(data, default_flow_style=False, sort_keys=False))
    from connpy.configfile import configfile
    return configfile(conf=str(config_file), key=str(tmp_config_dir / ".osk"))


@pytest.fixture
def mock_pexpect():
    """Mock pexpect.spawn for connection tests."""
    with patch("connpy.core.pexpect") as mock_pexp:
        child = MagicMock()
        child.before = b""
        child.after = b"router#"
        child.readline.return_value = b""
        child.child_fd = 3
        mock_pexp.spawn.return_value = child
        mock_pexp.EOF = object()
        mock_pexp.TIMEOUT = object()

        # Also mock fdpexpect
        with patch("connpy.core.fdpexpect", create=True) as mock_fd:
            mock_fd.fdspawn.return_value = MagicMock()
            yield {
                "pexpect": mock_pexp,
                "child": child,
                "fdpexpect": mock_fd
            }


@pytest.fixture
def mock_litellm():
    """Mock litellm.completion for AI tests."""
    with patch("connpy.ai.completion") as mock_comp:
        # Create a default response
        msg = MagicMock()
        msg.content = "Test response from AI"
        msg.tool_calls = None
        msg.role = "assistant"
        msg.model_dump.return_value = {
            "role": "assistant",
            "content": "Test response from AI"
        }

        choice = MagicMock()
        choice.message = msg

        response = MagicMock()
        response.choices = [choice]
        response.usage = MagicMock()
        response.usage.prompt_tokens = 100
        response.usage.completion_tokens = 50
        response.usage.total_tokens = 150

        mock_comp.return_value = response

        yield {
            "completion": mock_comp,
            "response": response,
            "message": msg,
            "choice": choice
        }


@pytest.fixture
def ai_config(tmp_config_dir):
    """Create a configfile with AI keys configured for AI tests."""
    config_file = tmp_config_dir / "config.yaml"
    data = {
        "config": {
            "case": False, "idletime": 30, "fzf": False,
            "ai": {
                "engineer_model": "test/test-model",
                "engineer_api_key": "test-engineer-key",
                "architect_model": "test/test-architect",
                "architect_api_key": "test-architect-key"
            }
        },
        "connections": SAMPLE_CONNECTIONS,
        "profiles": SAMPLE_PROFILES
    }
    config_file.write_text(yaml.dump(data, default_flow_style=False, sort_keys=False))
    from connpy.configfile import configfile
    return configfile(conf=str(config_file), key=str(tmp_config_dir / ".osk"))
@@ -1,483 +0,0 @@
"""Tests for connpy.ai module."""
import json
import os
import pytest
from unittest.mock import patch, MagicMock


# =========================================================================
# AI Init tests
# =========================================================================

class TestAIInit:
    def test_init_with_keys(self, ai_config, mock_litellm):
        """Initializes correctly when keys are configured."""
        from connpy.ai import ai
        myai = ai(ai_config)
        assert myai.engineer_model == "test/test-model"
        assert myai.architect_model == "test/test-architect"

    def test_ask_missing_engineer_key(self, config):
        """Raises ValueError if engineer key is missing when asking."""
        from connpy.ai import ai
        myai = ai(config)
        with pytest.raises(ValueError) as exc:
            myai.ask("hello")
        assert "Engineer API key not configured" in str(exc.value)

    def test_init_missing_architect_key_warns(self, ai_config, capsys, mock_litellm):
        """Warns if architect key is missing but doesn't crash."""
        # Remove architect key
        ai_config.config["ai"]["architect_api_key"] = None
        from connpy.ai import ai
        # Should not raise
        myai = ai(ai_config)
        assert myai.architect_key is None

    def test_default_models(self, config):
        """Default models are set correctly when not configured."""
        config.config["ai"] = {"engineer_api_key": "test-key", "architect_api_key": "test-key"}
        from connpy.ai import ai
        myai = ai(config)
        assert "gemini" in myai.engineer_model.lower()
        assert "claude" in myai.architect_model.lower() or "anthropic" in myai.architect_model.lower()

    def test_init_loads_memory(self, ai_config, tmp_path, mock_litellm):
        """Loads long-term memory from file if it exists."""
        memory_path = os.path.join(ai_config.defaultdir, "ai_memory.md")
        from connpy.ai import ai

        # Capture the real callables before patching; calling os.path.exists
        # or open inside the side_effect would hit the mock again and recurse.
        real_exists = os.path.exists
        real_open = open
        with patch("os.path.exists", side_effect=lambda p: True if p == memory_path else real_exists(p)):
            with patch("builtins.open", side_effect=lambda f, *a, **kw: (
                __import__("io").StringIO("## Memory\nRouter1 is border router")
                if f == memory_path else real_open(f, *a, **kw)
            )):
                try:
                    myai = ai(ai_config)
                except Exception:
                    pass  # May fail on other file opens, that's ok


# =========================================================================
# register_ai_tool tests
# =========================================================================

class TestRegisterAITool:
    @pytest.fixture
    def myai(self, ai_config, mock_litellm):
        from connpy.ai import ai
        return ai(ai_config)

    def _make_tool_def(self, name="my_tool"):
        return {
            "type": "function",
            "function": {
                "name": name,
                "description": "Test tool",
                "parameters": {"type": "object", "properties": {}}
            }
        }

    def test_register_tool_engineer(self, myai):
        tool_def = self._make_tool_def()
        myai.register_ai_tool(tool_def, lambda self, **kw: "ok", target="engineer")
        assert len(myai.external_engineer_tools) == 1
        assert len(myai.external_architect_tools) == 0

    def test_register_tool_architect(self, myai):
        tool_def = self._make_tool_def()
        myai.register_ai_tool(tool_def, lambda self, **kw: "ok", target="architect")
        assert len(myai.external_architect_tools) == 1
        assert len(myai.external_engineer_tools) == 0

    def test_register_tool_both(self, myai):
        tool_def = self._make_tool_def()
        myai.register_ai_tool(tool_def, lambda self, **kw: "ok", target="both")
        assert len(myai.external_engineer_tools) == 1
        assert len(myai.external_architect_tools) == 1

    def test_register_tool_handler(self, myai):
        tool_def = self._make_tool_def("custom_tool")
        handler = lambda self, **kw: "result"
        myai.register_ai_tool(tool_def, handler)
        assert "custom_tool" in myai.external_tool_handlers
        assert myai.external_tool_handlers["custom_tool"] is handler

    def test_register_tool_prompt_extension(self, myai):
        tool_def = self._make_tool_def()
        myai.register_ai_tool(
            tool_def, lambda self, **kw: "ok",
            engineer_prompt="- Custom capability",
            architect_prompt=" * Custom tool"
        )
        assert any("Custom capability" in ext for ext in myai.engineer_prompt_extensions)
        assert any("Custom tool" in ext for ext in myai.architect_prompt_extensions)

    def test_register_tool_status_formatter(self, myai):
        tool_def = self._make_tool_def("status_tool")
        formatter = lambda args: f"[STATUS] {args}"
        myai.register_ai_tool(tool_def, lambda self, **kw: "ok", status_formatter=formatter)
        assert "status_tool" in myai.tool_status_formatters


# =========================================================================
# Dynamic prompts tests
# =========================================================================

class TestDynamicPrompts:
    @pytest.fixture
    def myai(self, ai_config, mock_litellm):
        from connpy.ai import ai
        return ai(ai_config)

    def test_engineer_prompt_without_extensions(self, myai):
        prompt = myai.engineer_system_prompt
        assert "Plugin Capabilities" not in prompt
        assert "TECHNICAL EXECUTION ENGINE" in prompt

    def test_engineer_prompt_with_extensions(self, myai):
        myai.engineer_prompt_extensions.append("- AWS Cloud Auditing")
        prompt = myai.engineer_system_prompt
        assert "Plugin Capabilities" in prompt
        assert "AWS Cloud Auditing" in prompt

    def test_architect_prompt_without_extensions(self, myai):
        prompt = myai.architect_system_prompt
        assert "Plugin Capabilities" not in prompt
        assert "STRATEGIC REASONING ENGINE" in prompt

    def test_architect_prompt_with_extensions(self, myai):
        myai.architect_prompt_extensions.append(" * Custom tool available")
        prompt = myai.architect_system_prompt
        assert "Plugin Capabilities" in prompt
        assert "Custom tool available" in prompt


# =========================================================================
# _sanitize_messages tests
# =========================================================================

class TestSanitizeMessages:
    @pytest.fixture
    def myai(self, ai_config, mock_litellm):
        from connpy.ai import ai
        return ai(ai_config)

    def test_sanitize_empty(self, myai):
        assert myai._sanitize_messages([]) == []

    def test_sanitize_normal_messages(self, myai):
        messages = [
            {"role": "system", "content": "You are helpful"},
            {"role": "user", "content": "Hello"},
            {"role": "assistant", "content": "Hi there"}
        ]
        result = myai._sanitize_messages(messages)
        assert len(result) == 3

    def test_sanitize_removes_orphan_tool_calls(self, myai):
        """Tool calls at the end without responses are removed."""
        messages = [
            {"role": "user", "content": "do something"},
            {"role": "assistant", "content": None, "tool_calls": [
                {"id": "tc1", "function": {"name": "list_nodes", "arguments": "{}"}}
            ]}
            # No tool response follows!
        ]
        result = myai._sanitize_messages(messages)
        assert len(result) == 1  # Only the user message survives
        assert result[0]["role"] == "user"

    def test_sanitize_removes_orphan_tool_responses(self, myai):
        """Tool responses without preceding tool_calls are removed."""
        messages = [
            {"role": "user", "content": "hello"},
            {"role": "tool", "tool_call_id": "tc1", "name": "list_nodes", "content": "[]"}
        ]
        result = myai._sanitize_messages(messages)
        assert len(result) == 1
        assert result[0]["role"] == "user"

    def test_sanitize_preserves_valid_tool_pairs(self, myai):
        """Valid assistant+tool_calls followed by tool responses are preserved."""
        messages = [
            {"role": "user", "content": "list nodes"},
            {"role": "assistant", "content": None, "tool_calls": [
                {"id": "tc1", "function": {"name": "list_nodes", "arguments": "{}"}}
            ]},
            {"role": "tool", "tool_call_id": "tc1", "name": "list_nodes", "content": "[\"r1\"]"},
            {"role": "assistant", "content": "Found r1"}
        ]
        result = myai._sanitize_messages(messages)
        assert len(result) == 4

    def test_sanitize_strips_cache_control(self, myai):
        """_sanitize_messages should convert list-based content (with cache_control) back to strings."""
        messages = [
            {"role": "system", "content": [{"type": "text", "text": "system prompt", "cache_control": {"type": "ephemeral"}}]},
            {"role": "user", "content": "hello"}
        ]
        result = myai._sanitize_messages(messages)
        assert result[0]["role"] == "system"
        assert isinstance(result[0]["content"], str)
        assert result[0]["content"] == "system prompt"


# =========================================================================
# _truncate tests
# =========================================================================

class TestTruncate:
    @pytest.fixture
    def myai(self, ai_config, mock_litellm):
        from connpy.ai import ai
        return ai(ai_config)

    def test_truncate_short_text(self, myai):
        text = "short text"
        assert myai._truncate(text) == text

    def test_truncate_long_text(self, myai):
        text = "x" * 100000
        result = myai._truncate(text)
        assert len(result) < 100000
        assert "[... OUTPUT TRUNCATED ...]" in result

    def test_truncate_custom_limit(self, myai):
        text = "x" * 1000
        result = myai._truncate(text, limit=500)
        assert len(result) < 1000
        assert "[... OUTPUT TRUNCATED ...]" in result

    def test_truncate_preserves_head_and_tail(self, myai):
        text = "HEAD" + "x" * 100000 + "TAIL"
        result = myai._truncate(text)
        assert result.startswith("HEAD")
        assert result.endswith("TAIL")


# =========================================================================
# Tool methods tests
# =========================================================================

class TestToolMethods:
    @pytest.fixture
    def myai(self, ai_config, mock_litellm):
        from connpy.ai import ai
        return ai(ai_config)

    def test_list_nodes_tool_found(self, myai):
        result = myai.list_nodes_tool("router.*")
        parsed = json.loads(result) if isinstance(result, str) else result
        assert "router1" in str(parsed)

    def test_list_nodes_tool_not_found(self, myai):
        result = myai.list_nodes_tool("nonexistent_pattern_xyz")
        assert "No nodes found" in str(result)

    def test_get_node_info_masks_password(self, myai):
        result = myai.get_node_info_tool("router1")
        parsed = json.loads(result) if isinstance(result, str) else result
        assert parsed["password"] == "***"

    def test_is_safe_command_show(self, myai):
        assert myai._is_safe_command("show running-config") == True
        assert myai._is_safe_command("show ip int brief") == True

    def test_is_safe_command_config(self, myai):
        assert myai._is_safe_command("config t") == False
        assert myai._is_safe_command("write memory") == False

    def test_is_safe_command_ls(self, myai):
        assert myai._is_safe_command("ls -la") == True

    def test_is_safe_command_ping(self, myai):
        assert myai._is_safe_command("ping 10.0.0.1") == True


# =========================================================================
# manage_memory_tool tests
# =========================================================================

class TestManageMemory:
    @pytest.fixture
    def myai(self, ai_config, mock_litellm, tmp_path):
        from connpy.ai import ai
        myai = ai(ai_config)
        myai.memory_path = str(tmp_path / "ai_memory.md")
        return myai

    def test_manage_memory_append(self, myai):
        result = myai.manage_memory_tool("Router1 is border router", action="append")
        assert "successfully" in result.lower()
        assert os.path.exists(myai.memory_path)
        content = open(myai.memory_path).read()
        assert "Router1 is border router" in content

    def test_manage_memory_replace(self, myai):
        myai.manage_memory_tool("old content", action="append")
        myai.manage_memory_tool("new content only", action="replace")
        content = open(myai.memory_path).read()
        assert "new content only" in content
        assert "old content" not in content

    def test_manage_memory_empty_content(self, myai):
        result = myai.manage_memory_tool("", action="append")
        assert "error" in result.lower()


# =========================================================================
# ask() with mock LLM tests
# =========================================================================

class TestAsk:
    @pytest.fixture
    def myai(self, ai_config, mock_litellm):
        from connpy.ai import ai
        return ai(ai_config)

    def test_ask_basic_response(self, myai, mock_litellm):
        result = myai.ask("hello", stream=False)
        assert "response" in result
        assert "chat_history" in result
        assert "usage" in result
        assert result["response"] == "Test response from AI"

    def test_ask_sticky_brain_engineer(self, myai, mock_litellm):
        result = myai.ask("show me the routers", stream=False)
        assert result["responder"] == "engineer"

    def test_ask_explicit_architect(self, myai, mock_litellm):
        result = myai.ask("architect: review the network design", stream=False)
        assert result["responder"] == "architect"

    def test_ask_returns_usage(self, myai, mock_litellm):
        result = myai.ask("test", stream=False)
        assert result["usage"]["total"] > 0

    def test_ask_with_chat_history(self, myai, mock_litellm):
        history = [
            {"role": "user", "content": "previous question"},
            {"role": "assistant", "content": "previous answer"}
        ]
        result = myai.ask("follow up", chat_history=history, stream=False)
        assert result["response"] is not None


# =========================================================================
# _get_engineer_tools / _get_architect_tools tests
# =========================================================================

class TestToolDefinitions:
    @pytest.fixture
    def myai(self, ai_config, mock_litellm):
        from connpy.ai import ai
        return ai(ai_config)

    def test_engineer_tools_include_core(self, myai):
        tools = myai._get_engineer_tools()
        names = [t["function"]["name"] for t in tools]
        assert "list_nodes" in names
        assert "run_commands" in names
        assert "get_node_info" in names
        assert "consult_architect" in names
        assert "escalate_to_architect" in names

    def test_engineer_tools_include_external(self, myai):
        myai.external_engineer_tools.append({
            "type": "function",
            "function": {"name": "custom_tool", "description": "test", "parameters": {}}
        })
        tools = myai._get_engineer_tools()
        names = [t["function"]["name"] for t in tools]
        assert "custom_tool" in names

    def test_architect_tools_include_core(self, myai):
        tools = myai._get_architect_tools()
        names = [t["function"]["name"] for t in tools]
        assert "delegate_to_engineer" in names
        assert "return_to_engineer" in names
        assert "manage_memory_tool" in names

    def test_architect_tools_include_external(self, myai):
        myai.external_architect_tools.append({
            "type": "function",
            "function": {"name": "arch_tool", "description": "test", "parameters": {}}
        })
        tools = myai._get_architect_tools()
        names = [t["function"]["name"] for t in tools]
        assert "arch_tool" in names


# =========================================================================
# AI Session Management tests
# =========================================================================

class TestAISessions:
    @pytest.fixture
    def myai(self, ai_config, mock_litellm, tmp_path):
        from connpy.ai import ai
        ai_config.defaultdir = str(tmp_path)
        return ai(ai_config)

    def test_sessions_dir_initialization(self, myai, tmp_path):
        assert os.path.exists(os.path.join(tmp_path, "ai_sessions"))
        assert myai.sessions_dir == str(tmp_path / "ai_sessions")

    def test_generate_session_id(self, myai):
        session_id = myai._generate_session_id("Any query")
        # Format: YYYYMMDD-HHMMSS
        assert len(session_id) == 15
        assert "-" in session_id
        parts = session_id.split("-")
        assert len(parts[0]) == 8  # YYYYMMDD
        assert len(parts[1]) == 6  # HHMMSS

    def test_save_and_load_session(self, myai):
        history = [
            {"role": "user", "content": "Hello"},
            {"role": "assistant", "content": "Hi"}
        ]
        myai.save_session(history, title="Test Session")
        session_id = myai.session_id

        # Load it back
        loaded = myai.load_session_data(session_id)
        assert loaded["title"] == "Test Session"
        assert loaded["history"] == history
        assert loaded["model"] == myai.engineer_model

    def test_list_sessions(self, myai, capsys):
        history = [{"role": "user", "content": "Query 1"}]
        myai.save_session(history, title="Session 1")

        myai.list_sessions()
        captured = capsys.readouterr()
        assert "Session 1" in captured.out
        assert "AI Persisted Sessions" in captured.out

    def test_get_last_session_id(self, myai):
        # Save two sessions
        myai.session_id = None  # Force new
        myai.save_session([{"role": "user", "content": "First"}])
        first_id = myai.session_id
        import time
        time.sleep(1.1)  # Ensure a different timestamp

        myai.session_id = None  # Force new
        myai.save_session([{"role": "user", "content": "Second"}])
        second_id = myai.session_id

        last_id = myai.get_last_session_id()
        assert last_id == second_id
        assert last_id != first_id

    def test_delete_session(self, myai):
        myai.save_session([{"role": "user", "content": "To be deleted"}])
        session_id = myai.session_id
        assert os.path.exists(myai.session_path)

        myai.delete_session(session_id)
        assert not os.path.exists(myai.session_path)
@@ -1,56 +0,0 @@
"""Tests for connpy.core_plugins.capture"""
import pytest
from unittest.mock import MagicMock, patch
from connpy.core_plugins.capture import Entrypoint


@pytest.fixture
def RemoteCapture():
    return Entrypoint.get_remote_capture_class()


@pytest.fixture
def mock_connapp():
    app = MagicMock()
    app.services.nodes.list_nodes.return_value = ["test_node"]
    app.services.nodes.get_node_details.return_value = {"host": "127.0.0.1", "protocol": "ssh"}
    app.services.config_svc.get_settings().get.return_value = "/fake/ws"

    mock_node = MagicMock()
    mock_node.protocol = "ssh"
    mock_node.unique = "test_node"
    app.node.return_value = mock_node
    return app


class TestRemoteCapture:
    def test_init_node_not_found(self, mock_connapp, RemoteCapture):
        # Attempt to capture a node not in inventory
        mock_connapp.services.nodes.list_nodes.return_value = []
        with pytest.raises(SystemExit) as exc:
            RemoteCapture(mock_connapp, "test_node", "eth0")
        assert exc.value.code == 2

    def test_init_success(self, mock_connapp, RemoteCapture):
        rc = RemoteCapture(mock_connapp, "test_node", "eth0")
        assert rc.node_name == "test_node"
        assert rc.interface == "eth0"
        assert rc.wireshark_path == "/fake/ws"

    def test_is_port_in_use(self, mock_connapp, RemoteCapture):
        rc = RemoteCapture(mock_connapp, "test_node", "eth0")
        with patch("socket.socket") as mock_socket:
            mock_sock_instance = MagicMock()
            mock_socket.return_value.__enter__.return_value = mock_sock_instance

            mock_sock_instance.connect_ex.return_value = 0
            assert rc._is_port_in_use(8080) is True

            mock_sock_instance.connect_ex.return_value = 1
            assert rc._is_port_in_use(8080) is False

    def test_find_free_port(self, mock_connapp, RemoteCapture):
        rc = RemoteCapture(mock_connapp, "test_node", "eth0")
        with patch.object(RemoteCapture, "_is_port_in_use") as mock_is_in_use:
            # First 2 ports in use, 3rd is free
            mock_is_in_use.side_effect = [True, True, False]
            port = rc._find_free_port(20000, 30000)
            assert 20000 <= port <= 30000
            assert mock_is_in_use.call_count == 3
@@ -1,68 +0,0 @@
"""Tests for connpy.completion module."""
import os
import json
import pytest
from connpy.completion import load_txt_cache, get_cwd


# =========================================================================
# load_txt_cache tests
# =========================================================================

class TestLoadTxtCache:
    def test_load_existing_cache(self, tmp_path):
        """Loads lines from a file correctly."""
        cache_file = tmp_path / "cache.txt"
        cache_file.write_text("node1\nnode2\nnode3@folder")

        result = load_txt_cache(str(cache_file))
        assert result == ["node1", "node2", "node3@folder"]

    def test_load_nonexistent_cache(self, tmp_path):
        """Returns empty list if file is missing."""
        result = load_txt_cache(str(tmp_path / "missing.txt"))
        assert result == []


# =========================================================================
# get_cwd tests
# =========================================================================

class TestGetCwd:
    def test_current_dir(self, tmp_path, monkeypatch):
        """Lists files in current directory."""
        monkeypatch.chdir(tmp_path)
        (tmp_path / "file1.txt").touch()
        (tmp_path / "file2.py").touch()
        subdir = tmp_path / "subdir"
        subdir.mkdir()

        result = get_cwd(["run", "run"])
        # Should list files
        assert any("file1.txt" in r for r in result)
        assert any("subdir/" in r for r in result)

    def test_specific_path(self, tmp_path, monkeypatch):
        """Lists files matching a partial path."""
        monkeypatch.chdir(tmp_path)
        (tmp_path / "script.yaml").touch()
        (tmp_path / "script2.yaml").touch()

        result = get_cwd(["run", "script"])
        assert any("script" in r for r in result)

    def test_folder_only(self, tmp_path, monkeypatch):
        """folderonly=True returns only directories."""
        monkeypatch.chdir(tmp_path)
        (tmp_path / "file.txt").touch()
        subdir = tmp_path / "mydir"
        subdir.mkdir()

        result = get_cwd(["export", "export"], folderonly=True)
        files_in_result = [r for r in result if "file.txt" in r]
        assert len(files_in_result) == 0
        dirs_in_result = [r for r in result if "mydir" in r]
        assert len(dirs_in_result) > 0
@@ -1,585 +0,0 @@
|
||||
"""Tests for connpy.configfile module."""
|
||||
import json
|
||||
import os
|
||||
import re
|
||||
import pytest
|
||||
import yaml
|
||||
from copy import deepcopy
|
||||
|
||||
|
||||
class TestConfigfileInit:
|
||||
def test_creates_default_config(self, tmp_config_dir):
|
||||
"""Creates config.yaml with defaults when it doesn't exist."""
|
||||
config_file = tmp_config_dir / "config.yaml"
|
||||
config_file.unlink(missing_ok=True) # Remove existing
|
||||
key_file = tmp_config_dir / ".osk"
|
||||
|
||||
from connpy.configfile import configfile
|
||||
conf = configfile(conf=str(config_file), key=str(key_file))
|
||||
|
||||
assert config_file.exists()
|
||||
assert conf.config["case"] == False
|
||||
assert conf.config["idletime"] == 30
|
||||
assert "default" in conf.profiles
|
||||
|
||||
def test_creates_rsa_key(self, tmp_config_dir):
|
||||
"""Generates RSA key when it doesn't exist."""
|
||||
key_file = tmp_config_dir / ".osk"
|
||||
key_file.unlink() # Remove existing
|
||||
|
||||
from connpy.configfile import configfile
|
||||
conf = configfile(conf=str(tmp_config_dir / "config.yaml"), key=str(key_file))
|
||||
|
||||
assert key_file.exists()
|
||||
assert conf.privatekey is not None
|
||||
assert conf.publickey is not None
|
||||
|
||||
def test_loads_existing_config(self, config):
|
||||
"""Loads correctly from existing config."""
|
||||
assert config.config is not None
|
||||
assert config.connections is not None
|
||||
assert config.profiles is not None
|
||||
|
||||
def test_config_file_permissions(self, tmp_config_dir):
|
||||
"""Config is created with 0o600 permissions."""
|
||||
config_file = tmp_config_dir / "config.yaml"
|
||||
config_file.unlink(missing_ok=True)
|
||||
|
||||
from connpy.configfile import configfile
|
||||
configfile(conf=str(config_file), key=str(tmp_config_dir / ".osk"))
|
||||
|
||||
stat = os.stat(str(config_file))
|
||||
assert oct(stat.st_mode & 0o777) == oct(0o600)
|
||||
|
||||
def test_custom_paths(self, tmp_path):
|
||||
"""Accepts custom paths for conf and key."""
|
||||
config_dir = tmp_path / "custom"
|
||||
config_dir.mkdir()
|
||||
(config_dir / "plugins").mkdir()
|
||||
|
||||
# Write .folder for the config dir
|
||||
dot_folder = tmp_path / ".config" / "conn"
|
||||
dot_folder.mkdir(parents=True, exist_ok=True)
|
||||
(dot_folder / ".folder").write_text(str(config_dir))
|
||||
(dot_folder / "plugins").mkdir(exist_ok=True)
|
||||
|
||||
conf_path = str(config_dir / "my_config.yaml")
|
||||
key_path = str(config_dir / "my_key")
|
||||
|
||||
from connpy.configfile import configfile
|
||||
conf = configfile(conf=conf_path, key=key_path)
|
||||
|
||||
assert conf.file == conf_path
|
||||
assert conf.key == key_path
|
||||
|
||||
|
||||
class TestEncryption:
|
||||
def test_encrypt_password(self, config):
|
||||
"""Encrypts and produces b'...' format."""
|
||||
encrypted = config.encrypt("mysecret")
|
||||
assert encrypted.startswith("b'") or encrypted.startswith('b"')
|
||||
|
||||
def test_encrypt_decrypt_roundtrip(self, config):
|
||||
"""Encrypt then decrypt returns original."""
|
||||
from Crypto.PublicKey import RSA
|
||||
from Crypto.Cipher import PKCS1_OAEP
|
||||
import ast
|
||||
|
||||
original = "super_secret_password"
|
||||
encrypted = config.encrypt(original)
|
||||
|
||||
# Decrypt
|
||||
with open(config.key) as f:
|
||||
key = RSA.import_key(f.read())
|
||||
decryptor = PKCS1_OAEP.new(key)
|
||||
decrypted = decryptor.decrypt(ast.literal_eval(encrypted)).decode("utf-8")
|
||||
assert decrypted == original
|
||||
|
||||
|
||||
class TestExplodeUnique:
|
||||
def test_simple_node(self, config):
|
||||
result = config._explode_unique("router1")
|
||||
assert result == {"id": "router1"}
|
||||
|
||||
def test_node_with_folder(self, config):
|
||||
result = config._explode_unique("r1@office")
|
||||
assert result == {"id": "r1", "folder": "office"}
|
||||
|
||||
def test_node_with_subfolder(self, config):
|
||||
result = config._explode_unique("r1@dc@office")
|
||||
assert result == {"id": "r1", "folder": "office", "subfolder": "dc"}
|
||||
|
||||
def test_folder_only(self, config):
|
||||
result = config._explode_unique("@office")
|
||||
assert result == {"folder": "office"}
|
||||
|
||||
def test_subfolder_only(self, config):
|
||||
result = config._explode_unique("@dc@office")
|
||||
assert result == {"folder": "office", "subfolder": "dc"}
|
||||
|
||||
def test_too_deep(self, config):
|
||||
result = config._explode_unique("a@b@c@d")
|
||||
assert result == False
|
||||
|
||||
def test_empty_folder(self, config):
|
||||
result = config._explode_unique("a@")
|
||||
assert result == False
|
||||
|
||||
def test_empty_subfolder(self, config):
|
||||
result = config._explode_unique("a@@office")
|
||||
assert result == False
|
||||
|
||||
|
||||
class TestCRUDNodes:
    def test_add_node_root(self, config):
        config._connections_add(
            id="router1", host="10.0.0.1", protocol="ssh",
            port="22", user="admin", password="pass", options="",
            logs="", tags="", jumphost=""
        )
        assert "router1" in config.connections
        assert config.connections["router1"]["host"] == "10.0.0.1"

    def test_add_node_folder(self, config):
        config._folder_add(folder="office")
        config._connections_add(
            id="server1", folder="office", host="10.0.1.1",
            protocol="ssh", port="", user="root", password="pass",
            options="", logs="", tags="", jumphost=""
        )
        assert "server1" in config.connections["office"]

    def test_add_node_subfolder(self, config):
        config._folder_add(folder="office")
        config._folder_add(folder="office", subfolder="dc")
        config._connections_add(
            id="db1", folder="office", subfolder="dc", host="10.0.2.1",
            protocol="ssh", port="", user="dbadmin", password="pass",
            options="", logs="", tags="", jumphost=""
        )
        assert "db1" in config.connections["office"]["dc"]

    def test_del_node_root(self, config):
        config._connections_add(
            id="router1", host="10.0.0.1", protocol="ssh",
            port="", user="", password="", options="",
            logs="", tags="", jumphost=""
        )
        config._connections_del(id="router1")
        assert "router1" not in config.connections

    def test_del_node_folder(self, config):
        config._folder_add(folder="office")
        config._connections_add(
            id="server1", folder="office", host="10.0.1.1",
            protocol="ssh", port="", user="", password="",
            options="", logs="", tags="", jumphost=""
        )
        config._connections_del(id="server1", folder="office")
        assert "server1" not in config.connections["office"]

    def test_add_folder(self, config):
        config._folder_add(folder="office")
        assert "office" in config.connections
        assert config.connections["office"]["type"] == "folder"

    def test_add_subfolder(self, config):
        config._folder_add(folder="office")
        config._folder_add(folder="office", subfolder="dc")
        assert "dc" in config.connections["office"]
        assert config.connections["office"]["dc"]["type"] == "subfolder"

    def test_del_folder(self, config):
        config._folder_add(folder="office")
        config._folder_del(folder="office")
        assert "office" not in config.connections

    def test_del_subfolder(self, config):
        config._folder_add(folder="office")
        config._folder_add(folder="office", subfolder="dc")
        config._folder_del(folder="office", subfolder="dc")
        assert "dc" not in config.connections["office"]

class TestCRUDProfiles:
    def test_add_profile(self, config):
        config._profiles_add(
            id="myprofile", host="", protocol="telnet",
            port="23", user="user1", password="pass1",
            options="", logs="", tags="", jumphost=""
        )
        assert "myprofile" in config.profiles
        assert config.profiles["myprofile"]["protocol"] == "telnet"

    def test_del_profile(self, config):
        config._profiles_add(
            id="temp", host="", protocol="ssh", port="",
            user="", password="", options="", logs="", tags="", jumphost=""
        )
        config._profiles_del(id="temp")
        assert "temp" not in config.profiles

    def test_default_profile_exists(self, config):
        assert "default" in config.profiles

class TestGetItem:
    def test_getitem_node(self, populated_config):
        node = populated_config.getitem("router1")
        assert node["host"] == "10.0.0.1"
        assert "type" not in node  # type is stripped

    def test_getitem_folder(self, populated_config):
        nodes = populated_config.getitem("@office")
        # Should contain server1@office but NOT datacenter (subfolder)
        assert "server1@office" in nodes
        assert all("type" not in v for v in nodes.values())

    def test_getitem_subfolder(self, populated_config):
        nodes = populated_config.getitem("@datacenter@office")
        assert "db1@datacenter@office" in nodes

    def test_getitem_node_in_folder(self, populated_config):
        node = populated_config.getitem("server1@office")
        assert node["host"] == "10.0.1.1"

    def test_getitem_node_in_subfolder(self, populated_config):
        node = populated_config.getitem("db1@datacenter@office")
        assert node["host"] == "10.0.2.1"

    def test_getitem_with_profile_extraction(self, tmp_config_dir):
        """extract=True resolves @profile references."""
        config_file = tmp_config_dir / "config.yaml"
        data = {
            "config": {"case": False, "idletime": 30, "fzf": False},
            "connections": {
                "router1": {
                    "host": "10.0.0.1", "protocol": "ssh", "port": "",
                    "user": "@office-user", "password": "@office-user",
                    "options": "", "logs": "", "tags": "", "jumphost": "",
                    "type": "connection"
                }
            },
            "profiles": {
                "default": {"host": "", "protocol": "ssh", "port": "",
                            "user": "", "password": "", "options": "",
                            "logs": "", "tags": "", "jumphost": ""},
                "office-user": {"host": "", "protocol": "ssh", "port": "",
                                "user": "officeadmin", "password": "officepass",
                                "options": "", "logs": "", "tags": "", "jumphost": ""}
            }
        }
        config_file.write_text(yaml.dump(data, default_flow_style=False, sort_keys=False))

        from connpy.configfile import configfile
        conf = configfile(conf=str(config_file), key=str(tmp_config_dir / ".osk"))

        node = conf.getitem("router1", extract=True)
        assert node["user"] == "officeadmin"
        assert node["password"] == "officepass"

    def test_getitems_multiple(self, populated_config):
        nodes = populated_config.getitems(["router1", "server1@office"])
        assert "router1" in nodes
        assert "server1@office" in nodes

    def test_getitems_folder(self, populated_config):
        nodes = populated_config.getitems(["@office"])
        assert "server1@office" in nodes

class TestGetAll:
    def test_getallnodes_no_filter(self, populated_config):
        nodes = populated_config._getallnodes()
        assert "router1" in nodes
        assert "server1@office" in nodes
        assert "db1@datacenter@office" in nodes

    def test_getallnodes_string_filter(self, populated_config):
        nodes = populated_config._getallnodes("router.*")
        assert "router1" in nodes
        assert "server1@office" not in nodes

    def test_getallnodes_list_filter(self, populated_config):
        nodes = populated_config._getallnodes(["router.*", "db.*"])
        assert "router1" in nodes
        assert "db1@datacenter@office" in nodes
        assert "server1@office" not in nodes

    def test_getallnodes_filter_invalid_type(self, populated_config):
        with pytest.raises(SystemExit) as exc:
            populated_config._getallnodes(123)
        assert exc.value.code == 1

    def test_getallfolders(self, populated_config):
        folders = populated_config._getallfolders()
        assert "@office" in folders
        assert "@datacenter@office" in folders

    def test_getallnodesfull(self, populated_config):
        nodes = populated_config._getallnodesfull()
        assert "router1" in nodes
        assert nodes["router1"]["host"] == "10.0.0.1"

    def test_getallnodesfull_with_filter(self, populated_config):
        nodes = populated_config._getallnodesfull("router.*")
        assert "router1" in nodes
        assert "server1@office" not in nodes

    def test_profileused(self, tmp_config_dir):
        """Detects nodes using a specific profile."""
        config_file = tmp_config_dir / "config.yaml"
        data = {
            "config": {"case": False, "idletime": 30, "fzf": False},
            "connections": {
                "router1": {
                    "host": "10.0.0.1", "protocol": "ssh", "port": "",
                    "user": "@myprofile", "password": "pass",
                    "options": "", "logs": "", "tags": "", "jumphost": "",
                    "type": "connection"
                },
                "router2": {
                    "host": "10.0.0.2", "protocol": "ssh", "port": "",
                    "user": "admin", "password": "pass",
                    "options": "", "logs": "", "tags": "", "jumphost": "",
                    "type": "connection"
                }
            },
            "profiles": {
                "default": {"host": "", "protocol": "ssh", "port": "",
                            "user": "", "password": "", "options": "",
                            "logs": "", "tags": "", "jumphost": ""},
                "myprofile": {"host": "", "protocol": "ssh", "port": "",
                              "user": "profuser", "password": "profpass",
                              "options": "", "logs": "", "tags": "", "jumphost": ""}
            }
        }
        config_file.write_text(yaml.dump(data, default_flow_style=False, sort_keys=False))
        from connpy.configfile import configfile
        conf = configfile(conf=str(config_file), key=str(tmp_config_dir / ".osk"))

        used = conf._profileused("myprofile")
        assert "router1" in used
        assert "router2" not in used

    def test_saveconfig(self, config):
        """Save and reload correctly."""
        config._connections_add(
            id="test_node", host="1.2.3.4", protocol="ssh",
            port="", user="", password="", options="",
            logs="", tags="", jumphost=""
        )
        result = config._saveconfig(config.file)
        assert result == 0

        # Reload and verify
        from connpy.configfile import configfile
        reloaded = configfile(conf=config.file, key=config.key)
        assert "test_node" in reloaded.connections

class TestValidateConfig:
    def test_valid_config(self, config):
        data = {"config": {}, "connections": {}, "profiles": {}}
        assert config._validate_config(data) == True

    def test_none_data(self, config):
        assert config._validate_config(None) == False

    def test_string_data(self, config):
        assert config._validate_config("not a dict") == False

    def test_missing_key(self, config):
        assert config._validate_config({"config": {}, "connections": {}}) == False

    def test_empty_dict(self, config):
        assert config._validate_config({}) == False

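The five cases above pin down a simple contract: a config is valid only if it is a dict containing all three top-level sections. A standalone sketch of that check (illustrative only, not connpy's actual `_validate_config`):

```python
def validate_config(data):
    # Illustrative check matching the tests above, not connpy's real code:
    # data must be a dict and must contain every required top-level section.
    required = {"config", "connections", "profiles"}
    return isinstance(data, dict) and required.issubset(data.keys())
```

Note that `issubset` makes the check order-independent and tolerant of extra keys, which is consistent with the tests only asserting on missing sections, never on unexpected ones.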
class TestCorruptionRecovery:
    def test_corrupt_yaml_recovers_from_cache(self, tmp_config_dir):
        """If YAML is corrupt but cache is valid, recovers from cache."""
        config_file = tmp_config_dir / "config.yaml"
        key_file = tmp_config_dir / ".osk"

        # Write valid config with router1
        valid_data = {
            "config": {"case": False, "idletime": 30, "fzf": False},
            "connections": {"router1": {"host": "10.0.0.1", "type": "connection", "protocol": "ssh", "port": "", "user": "", "password": "", "options": "", "logs": "", "tags": "", "jumphost": ""}},
            "profiles": {"default": {"host": "", "protocol": "ssh", "port": "", "user": "", "password": "", "options": "", "logs": "", "tags": "", "jumphost": ""}}
        }
        config_file.write_text(yaml.dump(valid_data, default_flow_style=False, sort_keys=False))

        from connpy.configfile import configfile
        conf = configfile(conf=str(config_file), key=str(key_file))
        # Save to populate cache at the real self.cachefile path
        conf._saveconfig(conf.file)
        cachefile_path = conf.cachefile
        assert os.path.exists(cachefile_path)

        # Now corrupt the YAML
        config_file.write_text("")
        import time; time.sleep(0.05)  # Ensure YAML is newer than cache

        # Reload - should recover from cache
        conf2 = configfile(conf=str(config_file), key=str(key_file))
        assert "router1" in conf2.connections
        assert conf2.connections["router1"]["host"] == "10.0.0.1"

    def test_corrupt_cache_uses_yaml(self, tmp_config_dir):
        """If cache is corrupt but YAML is valid, uses YAML."""
        config_file = tmp_config_dir / "config.yaml"
        key_file = tmp_config_dir / ".osk"

        valid_data = {
            "config": {"case": False, "idletime": 30, "fzf": False},
            "connections": {},
            "profiles": {"default": {"host": "", "protocol": "ssh", "port": "", "user": "", "password": "", "options": "", "logs": "", "tags": "", "jumphost": ""}}
        }
        config_file.write_text(yaml.dump(valid_data, default_flow_style=False, sort_keys=False))

        from connpy.configfile import configfile
        conf = configfile(conf=str(config_file), key=str(key_file))
        cachefile_path = conf.cachefile

        # Now corrupt the cache (valid JSON but invalid config structure)
        from pathlib import Path
        Path(cachefile_path).write_text(json.dumps({"garbage": True}))
        # Make cache newer than YAML to force cache path
        import time; time.sleep(0.05)
        os.utime(cachefile_path, None)

        conf2 = configfile(conf=str(config_file), key=str(key_file))
        assert conf2.config["case"] == False
        assert "default" in conf2.profiles

    def test_both_corrupt_creates_default(self, tmp_config_dir):
        """If both YAML and cache are corrupt, creates fresh config."""
        config_file = tmp_config_dir / "config.yaml"
        key_file = tmp_config_dir / ".osk"

        from connpy.configfile import configfile
        conf = configfile(conf=str(config_file), key=str(key_file))
        cachefile_path = conf.cachefile

        # Corrupt YAML
        config_file.write_text("")
        # Corrupt cache
        from pathlib import Path
        Path(cachefile_path).write_text(json.dumps({"garbage": True}))
        import time; time.sleep(0.05)
        os.utime(str(config_file), None)

        conf2 = configfile(conf=str(config_file), key=str(key_file))

        # Should get defaults, not crash
        assert conf2.config is not None
        assert "default" in conf2.profiles
        assert isinstance(conf2.connections, dict)

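The three recovery tests above describe a "newest valid source wins" policy: try the fresher of YAML and cache, fall back to the other if it fails validation, and build a fresh default only when both are corrupt. A small sketch of that selection logic (the function name, signature, and the fresh-default shape are assumptions for illustration; connpy's actual loader also handles file I/O and timestamps):

```python
def choose_config_source(yaml_data, cache_data, yaml_is_newer, validate):
    # Try the newer source first; fall back to the older one if the newer
    # fails validation; if both are corrupt, return a fresh default.
    ordered = [yaml_data, cache_data] if yaml_is_newer else [cache_data, yaml_data]
    for candidate in ordered:
        if validate(candidate):
            return candidate
    return {"config": {}, "connections": {}, "profiles": {}}
```

Each test above exercises one branch: valid cache behind corrupt YAML, valid YAML behind corrupt cache, and the fresh-default fallback.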
class TestAtomicSave:
    def test_save_creates_no_leftover_tmp(self, config):
        """After successful save, no .tmp file remains."""
        config._connections_add(
            id="test123", host="1.2.3.4", protocol="ssh",
            port="", user="", password="", options="",
            logs="", tags="", jumphost=""
        )
        result = config._saveconfig(config.file)
        assert result == 0
        assert not os.path.exists(config.file + '.tmp')

    def test_save_preserves_original_on_error(self, config):
        """If save fails, original config file is not corrupted."""
        import unittest.mock as mock

        config._connections_add(
            id="original_node", host="10.0.0.1", protocol="ssh",
            port="", user="", password="", options="",
            logs="", tags="", jumphost=""
        )
        config._saveconfig(config.file)

        # Now add another node and make yaml.dump fail
        config._connections_add(
            id="new_node", host="10.0.0.2", protocol="ssh",
            port="", user="", password="", options="",
            logs="", tags="", jumphost=""
        )

        with mock.patch('connpy.configfile.yaml.dump', side_effect=IOError("disk full")):
            result = config._saveconfig(config.file)
            assert result == 1

        # Original file should still be valid with original_node
        from connpy.configfile import configfile
        reloaded = configfile(conf=config.file, key=config.key)
        assert "original_node" in reloaded.connections

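Both tests above hinge on the classic write-to-temp-then-rename pattern: serialize into a sibling `.tmp` file and atomically rename it over the target, so a failed write never clobbers the existing config. A minimal sketch (illustrative; connpy's `_saveconfig` also serializes YAML and refreshes the cache):

```python
import os

def atomic_write(path, text):
    # Write to a sibling .tmp file, fsync, then rename over the target.
    # os.replace is atomic on POSIX, so readers see either the old file
    # or the new one, never a partial write. Returns 0/1 like _saveconfig.
    tmp = path + ".tmp"
    try:
        with open(tmp, "w") as f:
            f.write(text)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)
        return 0
    except OSError:
        if os.path.exists(tmp):
            os.remove(tmp)  # leave no leftover .tmp on failure
        return 1
```

The cleanup in the `except` branch is what `test_save_creates_no_leftover_tmp` checks for, and the rename-only-on-success ordering is what keeps `original_node` intact in `test_save_preserves_original_on_error`.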
class TestMigrationSafety:
    def test_migration_validates_legacy_data(self, tmp_path):
        """Migration skips invalid legacy JSON files."""
        from unittest.mock import patch
        config_dir = tmp_path / ".config" / "conn"
        config_dir.mkdir(parents=True)
        (config_dir / "plugins").mkdir()

        # Write .folder
        (config_dir / ".folder").write_text(str(config_dir))

        # Generate RSA key
        from Crypto.PublicKey import RSA
        key = RSA.generate(2048)
        key_file = config_dir / ".osk"
        key_file.write_bytes(key.export_key("PEM"))
        os.chmod(str(key_file), 0o600)

        # Write invalid JSON config (missing required keys)
        legacy_file = config_dir / "config.json"
        legacy_file.write_text(json.dumps({"garbage": True}))

        with patch("os.path.expanduser", return_value=str(tmp_path)):
            from connpy.configfile import configfile
            conf = configfile(key=str(key_file))

        # Legacy file should NOT have been moved to .backup
        assert legacy_file.exists()
        assert not (config_dir / "config.json.backup").exists()

    def test_migration_verifies_written_yaml(self, tmp_path):
        """Migration succeeds when legacy JSON is valid."""
        from unittest.mock import patch
        config_dir = tmp_path / ".config" / "conn"
        config_dir.mkdir(parents=True)
        (config_dir / "plugins").mkdir()

        # Write .folder
        (config_dir / ".folder").write_text(str(config_dir))

        # Generate RSA key
        from Crypto.PublicKey import RSA
        key = RSA.generate(2048)
        key_file = config_dir / ".osk"
        key_file.write_bytes(key.export_key("PEM"))
        os.chmod(str(key_file), 0o600)

        valid_data = {
            "config": {"case": False, "idletime": 30, "fzf": False},
            "connections": {"r1": {"host": "1.2.3.4", "type": "connection", "protocol": "ssh", "port": "", "user": "", "password": "", "options": "", "logs": "", "tags": "", "jumphost": ""}},
            "profiles": {"default": {"host": "", "protocol": "ssh", "port": "", "user": "", "password": "", "options": "", "logs": "", "tags": "", "jumphost": ""}}
        }
        legacy_file = config_dir / "config.json"
        legacy_file.write_text(json.dumps(valid_data))

        with patch("os.path.expanduser", return_value=str(tmp_path)):
            from connpy.configfile import configfile
            conf = configfile(key=str(key_file))

        # Migration should have succeeded: YAML exists, JSON backed up
        yaml_file = config_dir / "config.yaml"
        assert yaml_file.exists()
        assert (config_dir / "config.json.backup").exists()
        assert not legacy_file.exists()
        assert "r1" in conf.connections
@@ -1,264 +0,0 @@
import pytest
from unittest.mock import patch, MagicMock
from connpy.connapp import connapp
import sys
import yaml
import os


@pytest.fixture
def app(populated_config):
    """Returns an instance of connapp initialized with the mock config."""
    return connapp(populated_config)


def test_connapp_init(app, populated_config):
    """Test that connapp initializes correctly with config."""
    assert app.config == populated_config
    assert app.case == populated_config.config.get("case", False)


@patch("connpy.cli.node_handler.NodeHandler.dispatch")
def test_node_default(mock_func_node, app):
    """Test that default 'node' command correctly parses and calls _func_node."""
    app.start(["node", "router1"])
    mock_func_node.assert_called_once()
    args = mock_func_node.call_args[0][0]
    assert args.data == "router1"
    assert args.action == "connect"


@patch("connpy.cli.node_handler.NodeHandler.dispatch")
def test_node_add(mock_func_node, app):
    """Test that 'node -a' command correctly parses."""
    app.start(["node", "-a", "new_router"])
    mock_func_node.assert_called_once()
    args = mock_func_node.call_args[0][0]
    assert args.data == "new_router"
    assert args.action == "add"


@patch("connpy.services.node_service.NodeService.list_nodes")
@patch("connpy.services.node_service.NodeService.delete_node")
@patch("inquirer.prompt")
def test_node_del(mock_prompt, mock_delete_node, mock_list_nodes, app):
    mock_list_nodes.return_value = ["router1"]
    mock_prompt.return_value = {"delete": True}
    app.start(["node", "-r", "router1"])
    mock_delete_node.assert_called_once_with("router1", is_folder=False)


@patch("connpy.services.node_service.NodeService.list_nodes")
@patch("connpy.services.node_service.NodeService.get_node_details")
@patch("connpy.services.node_service.NodeService.update_node")
@patch("connpy.cli.forms.Forms.questions_edit")
@patch("connpy.cli.forms.Forms.questions_nodes")
def test_node_mod(mock_q_nodes, mock_q_edit, mock_update_node, mock_get_details, mock_list_nodes, app):
    mock_list_nodes.return_value = ["router1"]
    mock_get_details.return_value = {"host": "1.1.1.1", "port": 22}
    mock_q_edit.return_value = {"host": True}
    mock_q_nodes.return_value = {"host": "2.2.2.2", "port": 22}

    app.start(["node", "-e", "router1"])
    mock_update_node.assert_called_once()


@patch("connpy.printer.data")
def test_node_show(mock_data, app):
    app.nodes_list = ["router1"]
    app.config.getitem = MagicMock(return_value={"host": "1.1.1.1"})
    app.start(["node", "-s", "router1"])
    mock_data.assert_called()


@patch("connpy.services.profile_service.ProfileService.list_profiles")
@patch("connpy.connapp.printer.console.print")
def test_profile_list(mock_print, mock_list_profiles, app):
    """Test 'profile list' invokes profile service correctly."""
    mock_list_profiles.return_value = ["default", "office-user"]
    app.start(["list", "profiles"])
    assert mock_list_profiles.call_count >= 2


@patch("connpy.services.node_service.NodeService.list_nodes")
def test_node_list(mock_list_nodes, app):
    """Test 'list nodes' invokes node service."""
    mock_list_nodes.return_value = ["router1", "server1"]
    app.start(["list", "nodes"])
    # Should be called during init and during the list command
    assert mock_list_nodes.call_count >= 2


@patch("connpy.services.system_service.SystemService.get_api_status")
def test_api_stop(mock_status, app):
    mock_status.return_value = {"running": True, "pid": "1234"}
    app.services.system.stop_api = MagicMock(return_value=True)
    app.start(["api", "-x"])
    app.services.system.stop_api.assert_called_once()


@patch("connpy.services.profile_service.ProfileService.list_profiles")
@patch("connpy.services.profile_service.ProfileService.add_profile")
@patch("connpy.cli.forms.Forms.questions_profiles")
def test_profile_add(mock_q_profiles, mock_add_profile, mock_list_profiles, app):
    mock_list_profiles.return_value = ["default"]
    mock_q_profiles.return_value = {"host": "test"}
    app.start(["profile", "-a", "new_profile"])
    mock_add_profile.assert_called_once_with("new_profile", {"host": "test"})


@patch("connpy.services.profile_service.ProfileService.get_profile")
@patch("connpy.services.profile_service.ProfileService.delete_profile")
@patch("inquirer.prompt")
def test_profile_del(mock_prompt, mock_delete_profile, mock_get_profile, app):
    mock_get_profile.return_value = {"host": "test"}
    mock_prompt.return_value = {"delete": True}
    app.start(["profile", "-r", "test_profile"])
    mock_delete_profile.assert_called_once_with("test_profile")


@patch("connpy.services.profile_service.ProfileService.get_profile")
@patch("connpy.services.profile_service.ProfileService.update_profile")
@patch("connpy.cli.forms.Forms.questions_edit")
@patch("connpy.cli.forms.Forms.questions_profiles")
def test_profile_mod(mock_q_profiles, mock_q_edit, mock_update_profile, mock_get_profile, app):
    mock_get_profile.return_value = {"host": "test", "port": 22}
    mock_q_edit.return_value = {"host": True}
    mock_q_profiles.return_value = {"id": "test_profile", "host": "new_host", "port": 22}
    app.start(["profile", "-e", "test_profile"])
    mock_update_profile.assert_called_once_with("test_profile", {"id": "test_profile", "host": "new_host", "port": 22})


@patch("connpy.services.profile_service.ProfileService.get_profile")
@patch("connpy.printer.data")
def test_profile_show(mock_data, mock_get_profile, app):
    mock_get_profile.return_value = {"host": "test"}
    app.start(["profile", "-s", "test_profile"])
    mock_data.assert_called()


@patch("connpy.services.node_service.NodeService.move_node")
def test_move(mock_move_node, app):
    app.start(["move", "src_node", "dst_node"])
    mock_move_node.assert_called_once_with("src_node", "dst_node", copy=False)


@patch("connpy.services.node_service.NodeService.move_node")
def test_copy(mock_move_node, app):
    app.start(["copy", "src_node", "dst_node"])
    mock_move_node.assert_called_once_with("src_node", "dst_node", copy=True)


@patch("connpy.cli.forms.Forms.questions_bulk")
@patch("connpy.services.node_service.NodeService.bulk_add")
def test_bulk(mock_bulk_add, mock_q_bulk, app):
    mock_q_bulk.return_value = {"ids": "node1", "host": "host1", "location": ""}
    mock_bulk_add.return_value = 1
    app.start(["bulk"])
    mock_bulk_add.assert_called_once()


@patch("connpy.services.import_export_service.ImportExportService.export_to_file")
def test_export(mock_export, app):
    with pytest.raises(SystemExit):
        app.start(["export", "file.yml", "@folder1"])
    mock_export.assert_called_once_with("file.yml", folders=["@folder1"])


@patch("os.path.exists")
@patch("inquirer.prompt")
@patch("connpy.services.import_export_service.ImportExportService.import_from_file")
def test_import(mock_import, mock_prompt, mock_exists, app):
    mock_exists.return_value = True
    mock_prompt.return_value = {"import": True}
    app.start(["import", "file.yml"])
    mock_import.assert_called_once_with("file.yml")


@patch("connpy.services.ai_service.AIService.ask")
@patch("connpy.connapp.console.status")
def test_ai(mock_status, mock_ask, app):
    mock_ask.return_value = {"response": "AI output", "usage": {"total": 10, "input": 5, "output": 5}}

    app.start(["ai", "--engineer-api-key", "testkey", "how are you"])
    mock_ask.assert_called_once()


@patch("connpy.services.execution_service.ExecutionService.run_commands")
def test_run(mock_run_commands, app):
    app.start(["run", "node1", "command1", "command2"])
    mock_run_commands.assert_called_once()
    assert mock_run_commands.call_args[1]["nodes_filter"] == "node1"
    assert mock_run_commands.call_args[1]["commands"] == ["command1 command2"]


@patch("os.path.exists")
@patch("shutil.copy2")
@patch("connpy.plugins.Plugins.verify_script")
def test_plugin_add(mock_verify, mock_copy, mock_exists, app):
    def mock_exists_side_effect(path):
        if "testplug.py" in path: return False
        if "testplug.py.bkp" in path: return False
        if "file.py" in path: return True
        return True
    mock_exists.side_effect = mock_exists_side_effect
    mock_verify.return_value = None
    app.commands = []
    app.start(["plugin", "--add", "testplug", "file.py"])
    mock_copy.assert_called()


@patch("connpy.services.config_service.ConfigService.update_setting")
def test_config(mock_update_setting, app):
    app.start(["config", "--allow-uppercase", "true"])
    mock_update_setting.assert_called_with("case", True)


@patch("connpy.services.system_service.SystemService.get_api_status")
def test_api_start(mock_status, app):
    mock_status.return_value = {"running": False}
    app.services.system.start_api = MagicMock()
    app.start(["api", "-s", "8080"])
    app.services.system.start_api.assert_called_once_with(port=8080)


@patch("connpy.services.system_service.SystemService.get_api_status")
def test_api_debug(mock_status, app):
    mock_status.return_value = {"running": False}
    app.services.system.debug_api = MagicMock()
    app.start(["api", "-d", "8080"])
    app.services.system.debug_api.assert_called_once_with(port=8080)


@patch("connpy.services.node_service.NodeService.list_folders")
def test_list_folders(mock_list_folders, app):
    mock_list_folders.return_value = ["folder1"]
    app.start(["list", "folders"])
    # Called during init and during the list command
    assert mock_list_folders.call_count >= 2


@patch("connpy.services.config_service.ConfigService.update_setting")
def test_config_various(mock_update_setting, app):
    app.start(["config", "--fzf", "true"])
    mock_update_setting.assert_called_with("fzf", True)
    app.start(["config", "--keepalive", "60"])
    mock_update_setting.assert_called_with("idletime", 60)


@patch("connpy.services.config_service.ConfigService.set_config_folder")
def test_config_folder(mock_set_config_folder, app):
    app.start(["config", "--configfolder", "/new/path"])
    mock_set_config_folder.assert_called_once_with("/new/path")


@patch("connpy.services.plugin_service.PluginService.list_plugins")
def test_plugin_list(mock_list_plugins, app):
    mock_list_plugins.return_value = {"testplug": {"enabled": True}}
    app.start(["plugin", "--list"])
    mock_list_plugins.assert_called_once()


@patch("connpy.services.plugin_service.PluginService.delete_plugin")
def test_plugin_delete(mock_delete, app):
    app.start(["plugin", "--del", "testplug"])
    mock_delete.assert_called_once_with("testplug")


@patch("connpy.services.plugin_service.PluginService.enable_plugin")
def test_plugin_enable(mock_enable, app):
    app.start(["plugin", "--enable", "testplug"])
    mock_enable.assert_called_once_with("testplug")


@patch("connpy.services.plugin_service.PluginService.disable_plugin")
def test_plugin_disable(mock_disable, app):
    app.start(["plugin", "--disable", "testplug"])
    mock_disable.assert_called_once_with("testplug")


@patch("connpy.services.ai_service.AIService.list_sessions")
def test_ai_list(mock_list_sessions, app):
    mock_list_sessions.return_value = [{"id": "1", "title": "t", "created_at": "now", "model": "m"}]
    app.start(["ai", "--list"])
    mock_list_sessions.assert_called_once()


def test_type_node_reserved_word(app):
    app.commands = ["bulk", "ai", "run"]
    with patch("sys.argv", ["connpy", "node", "-a", "bulk"]):
        with pytest.raises(SystemExit) as exc:
            app._type_node("bulk")
        assert exc.value.code == 2

    # In move/copy it also raises because destination cannot be reserved
    with patch("sys.argv", ["connpy", "mv", "test1", "bulk"]):
        with pytest.raises(SystemExit) as exc:
            app._type_node("bulk")
        assert exc.value.code == 2
@@ -1,437 +0,0 @@
|
||||
"""Tests for connpy.core module — node and nodes classes."""
|
||||
import json
|
||||
import os
|
||||
import io
|
||||
import re
|
||||
import pytest
|
||||
from unittest.mock import patch, MagicMock, PropertyMock
|
||||
from copy import deepcopy
|
||||
|
||||
|
||||
# =========================================================================
|
||||
# node.__init__ tests
|
||||
# =========================================================================
|
||||
|
||||
class TestNodeInit:
    def test_basic_init(self):
        """Creates node with basic attributes."""
        from connpy.core import node
        n = node("router1", "10.0.0.1", user="admin", password="pass1", protocol="ssh")
        assert n.unique == "router1"
        assert n.host == "10.0.0.1"
        assert n.user == "admin"
        assert n.protocol == "ssh"
        assert n.password == ["pass1"]

    def test_default_protocol(self):
        """Default protocol is ssh."""
        from connpy.core import node
        n = node("router1", "10.0.0.1")
        assert n.protocol == "ssh"

    def test_password_as_list_of_profiles(self, populated_config):
        """Password list with @profile references resolves correctly."""
        from connpy.core import node
        n = node("router1", "10.0.0.1", password=["@office-user"],
                 config=populated_config)
        assert n.password == ["officepass"]

    def test_password_plain_string(self):
        """Plain string password is wrapped in a list."""
        from connpy.core import node
        n = node("router1", "10.0.0.1", password="mypass")
        assert n.password == ["mypass"]

    def test_node_with_profile(self, populated_config):
        """Resolves @profile references for user."""
        from connpy.core import node
        n = node("test1", "10.0.0.1", user="@office-user", password="plain",
                 config=populated_config)
        assert n.user == "officeadmin"

    def test_node_tags(self):
        """Tags are stored correctly."""
        from connpy.core import node
        tags = {"os": "cisco_ios", "prompt": r"Router#"}
        n = node("router1", "10.0.0.1", tags=tags)
        assert n.tags["os"] == "cisco_ios"


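The tests above assert that a plain string password is normalized into a one-element list. As an illustration only, a hypothetical `normalize_password` helper (not connpy's actual code) that mirrors the asserted behavior could look like:

```python
def normalize_password(password):
    # Illustrative sketch, not connpy's implementation: mirror the behavior
    # asserted in TestNodeInit (a plain string is wrapped in a list,
    # a list is kept as-is).
    if isinstance(password, str):
        return [password]
    return list(password)
```

This matches `test_password_plain_string`, where `password="mypass"` becomes `["mypass"]`.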
# =========================================================================
# Command generation tests
# =========================================================================

class TestCommandGeneration:
    def _make_node(self, **kwargs):
        from connpy.core import node
        defaults = {
            "unique": "test", "host": "10.0.0.1", "protocol": "ssh",
            "user": "admin", "password": "", "port": "", "options": "",
            "jumphost": "", "tags": "", "logs": ""
        }
        defaults.update(kwargs)
        return node(defaults.pop("unique"), defaults.pop("host"), **defaults)

    def test_ssh_cmd_basic(self):
        n = self._make_node()
        cmd = n._get_cmd()
        assert "ssh" in cmd
        assert "admin@10.0.0.1" in cmd

    def test_ssh_cmd_port(self):
        n = self._make_node(port="2222")
        cmd = n._get_cmd()
        assert "-p 2222" in cmd

    def test_ssh_cmd_options(self):
        n = self._make_node(options="-o StrictHostKeyChecking=no")
        cmd = n._get_cmd()
        assert "-o StrictHostKeyChecking=no" in cmd

    def test_sftp_cmd_port(self):
        n = self._make_node(protocol="sftp", port="2222")
        cmd = n._get_cmd()
        assert "-P 2222" in cmd  # SFTP uses uppercase P

    def test_telnet_cmd(self):
        n = self._make_node(protocol="telnet", port="23")
        cmd = n._get_cmd()
        assert "telnet 10.0.0.1" in cmd
        assert "23" in cmd

    def test_ssm_cmd_basic(self):
        n = self._make_node(protocol="ssm", host="i-12345")
        cmd = n._get_cmd()
        assert "aws ssm start-session" in cmd
        assert "--target i-12345" in cmd

    def test_ssm_cmd_tags(self):
        n = self._make_node(protocol="ssm", host="i-12345",
                            tags={"region": "us-west-2", "profile": "prod"})
        cmd = n._get_cmd()
        assert "--region us-west-2" in cmd
        assert "--profile prod" in cmd

    def test_ssm_cmd_options(self):
        n = self._make_node(protocol="ssm", host="i-12345",
                            options="--document-name AWS-StartInteractiveCommand")
        cmd = n._get_cmd()
        assert "--document-name AWS-StartInteractiveCommand" in cmd

    def test_kubectl_cmd(self):
        n = self._make_node(protocol="kubectl", host="my-pod",
                            tags={"kube_command": "/bin/sh"})
        cmd = n._get_cmd()
        assert "kubectl exec" in cmd
        assert "my-pod" in cmd
        assert "/bin/sh" in cmd

    def test_kubectl_cmd_default_command(self):
        n = self._make_node(protocol="kubectl", host="my-pod")
        cmd = n._get_cmd()
        assert "/bin/bash" in cmd

    def test_docker_cmd(self):
        n = self._make_node(protocol="docker", host="my-container",
                            tags={"docker_command": "/bin/sh"})
        cmd = n._get_cmd()
        assert "docker" in cmd
        assert "my-container" in cmd
        assert "/bin/sh" in cmd

    def test_invalid_protocol_raises(self):
        n = self._make_node(protocol="invalid_proto")
        with pytest.raises(SystemExit) as exc:
            n._get_cmd()
        assert exc.value.code == 1

    def test_ssh_cmd_no_user(self):
        n = self._make_node(user="")
        cmd = n._get_cmd()
        assert "10.0.0.1" in cmd
        assert "@" not in cmd  # No user@ prefix


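The port tests above pin down a subtle asymmetry: `ssh` takes a lowercase `-p` flag while `sftp` takes an uppercase `-P`. A minimal sketch of the flag selection (a hypothetical helper, not connpy's `_get_cmd`):

```python
def build_port_flag(protocol, port):
    # Illustrative only: return the port flag fragment for a generated
    # command string. Empty port means no flag at all.
    if not port:
        return ""
    # sftp expects uppercase -P; ssh (and scp-style tools) expect lowercase -p
    return f"-P {port}" if protocol == "sftp" else f"-p {port}"
```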
# =========================================================================
# Password decryption tests
# =========================================================================

class TestPasswordDecryption:
    def test_passtx_plaintext(self, config):
        """Plaintext passwords pass through unchanged."""
        from connpy.core import node
        n = node("test", "10.0.0.1", password="plainpass", config=config)
        result = n._passtx(["plainpass"])
        assert result == ["plainpass"]

    def test_passtx_encrypted(self, config):
        """Encrypted passwords get decrypted."""
        from connpy.core import node
        encrypted = config.encrypt("mysecret")
        n = node("test", "10.0.0.1", password=encrypted, config=config)
        result = n._passtx([encrypted])
        assert result == ["mysecret"]

    def test_passtx_missing_key_raises(self):
        """Decrypting without a valid key file raises."""
        from connpy.core import node
        n = node("test", "10.0.0.1", password="pass")
        # A password formatted as encrypted, but no valid key to decrypt it
        with pytest.raises(Exception):
            n._passtx(["""b'corrupted_encrypted_data'"""], keyfile="/nonexistent")


# =========================================================================
# Log handling tests
# =========================================================================

class TestLogHandling:
    def test_logfile_variable_substitution(self):
        from connpy.core import node
        n = node("router1", "10.0.0.1", user="admin", protocol="ssh", port="22",
                 logs="/logs/${unique}_${host}_${user}")
        result = n._logfile()
        assert result == "/logs/router1_10.0.0.1_admin"

    def test_logfile_date_substitution(self):
        from connpy.core import node
        import datetime
        n = node("router1", "10.0.0.1", logs="/logs/${date '%Y'}")
        result = n._logfile()
        assert datetime.datetime.now().strftime("%Y") in result

    def test_logclean_removes_ansi(self):
        from connpy.core import node
        n = node("test", "10.0.0.1")
        dirty = "\x1B[32mgreen text\x1B[0m"
        clean = n._logclean(dirty, var=True)
        assert "\x1B" not in clean
        assert "green text" in clean

    def test_logclean_removes_backspaces(self):
        from connpy.core import node
        n = node("test", "10.0.0.1")
        dirty = "type\bo"
        clean = n._logclean(dirty, var=True)
        assert "\b" not in clean


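The log tests above exercise two behaviors: `${name}` and `${date '<fmt>'}` placeholders in log paths, and stripping of ANSI color codes. A rough sketch of how such substitution and cleaning could be done (hypothetical helpers, not connpy's `_logfile`/`_logclean`):

```python
import datetime
import re


def render_logfile(template, **fields):
    # Illustrative only: substitute ${name} placeholders from keyword fields,
    # and ${date '<fmt>'} with the current time formatted via strftime.
    def repl(match):
        key = match.group(1)
        if key.startswith("date "):
            fmt = key[len("date "):].strip("'\"")
            return datetime.datetime.now().strftime(fmt)
        return str(fields.get(key, match.group(0)))
    return re.sub(r"\$\{([^}]+)\}", repl, template)


# Illustrative ANSI color-code stripper (SGR sequences only)
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")


def strip_ansi(text):
    return ANSI_RE.sub("", text)
```

Unknown placeholders are left untouched rather than raising, which keeps the helper forgiving for templates with literal `${...}` text.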
# =========================================================================
# run() and test() with mock pexpect
# =========================================================================

class TestNodeRun:
    def _make_connected_node(self, mock_pexpect_obj, **kwargs):
        """Create a node and mock its _connect to succeed."""
        from connpy.core import node
        defaults = {
            "unique": "router1", "host": "10.0.0.1",
            "protocol": "ssh", "user": "admin", "password": ""
        }
        defaults.update(kwargs)
        n = node(defaults.pop("unique"), defaults.pop("host"), **defaults)
        return n

    def test_run_returns_output(self, mock_pexpect):
        """run() returns string output."""
        child = mock_pexpect["child"]
        pexp = mock_pexpect["pexpect"]

        # Simulate: connect succeeds, command runs, prompt found
        child.expect.return_value = 9  # prompt index for ssh
        child.logfile_read = None

        from connpy.core import node
        n = node("router1", "10.0.0.1", user="admin", password="")

        # Mock _connect to return True and set up child
        with patch.object(n, '_connect', return_value=True):
            n.child = child
            log_buffer = io.BytesIO(b"show version\nRouter v1.0\nrouter#")
            n.mylog = log_buffer
            child.logfile_read = log_buffer

            with patch.object(n, '_logclean', return_value="Router v1.0"):
                output = n.run(["show version"])

        assert n.status == 0
        assert output == "Router v1.0"

    def test_run_status_1_on_failure(self, mock_pexpect):
        """Status 1 when connection fails."""
        from connpy.core import node
        n = node("router1", "10.0.0.1", user="admin", password="")

        with patch.object(n, '_connect', return_value="Connection failed code: 1\nrefused"):
            output = n.run(["show version"])

        assert n.status == 1
        assert "refused" in output

    def test_run_with_variables(self, mock_pexpect):
        """Variables get substituted in commands."""
        child = mock_pexpect["child"]
        child.expect.return_value = 9

        from connpy.core import node
        n = node("router1", "10.0.0.1", user="admin", password="")

        sent_commands = []
        child.sendline.side_effect = lambda cmd: sent_commands.append(cmd)

        with patch.object(n, '_connect', return_value=True):
            n.child = child
            n.mylog = io.BytesIO(b"output")
            with patch.object(n, '_logclean', return_value="output"):
                n.run(["show ip route {subnet}"], vars={"subnet": "10.0.0.0/24"})

        assert "show ip route 10.0.0.0/24" in sent_commands

    def test_run_saves_to_folder(self, mock_pexpect, tmp_path):
        """folder param saves log file."""
        child = mock_pexpect["child"]
        child.expect.return_value = 9

        from connpy.core import node
        n = node("router1", "10.0.0.1", user="admin", password="")

        with patch.object(n, '_connect', return_value=True):
            n.child = child
            n.mylog = io.BytesIO(b"log output")
            with patch.object(n, '_logclean', return_value="log output"):
                n.run(["show version"], folder=str(tmp_path))

        log_files = list(tmp_path.glob("router1_*.txt"))
        assert len(log_files) == 1
        assert "log output" in log_files[0].read_text()


class TestNodeTest:
    def test_test_returns_dict(self, mock_pexpect):
        """test() returns dict of results."""
        child = mock_pexpect["child"]
        child.expect.return_value = 0  # prompt found (index 0 in test expects)

        from connpy.core import node
        n = node("router1", "10.0.0.1", user="admin", password="")

        with patch.object(n, '_connect', return_value=True):
            n.child = child
            n.mylog = io.BytesIO(b"1.1.1.1 is up")
            with patch.object(n, '_logclean', return_value="1.1.1.1 is up"):
                result = n.test(["ping 1.1.1.1"], "1.1.1.1")

        assert isinstance(result, dict)
        assert result.get("1.1.1.1") is True

    def test_test_expected_not_found(self, mock_pexpect):
        """Expected text not found returns False."""
        child = mock_pexpect["child"]
        child.expect.return_value = 0

        from connpy.core import node
        n = node("router1", "10.0.0.1", user="admin", password="")

        with patch.object(n, '_connect', return_value=True):
            n.child = child
            n.mylog = io.BytesIO(b"some other output")
            with patch.object(n, '_logclean', return_value="some other output"):
                result = n.test(["ping 1.1.1.1"], "1.1.1.1")

        assert isinstance(result, dict)
        assert result.get("1.1.1.1") is False


# =========================================================================
# nodes (parallel) tests
# =========================================================================

class TestNodes:
    def test_nodes_init(self):
        """Creates list of node objects."""
        from connpy.core import nodes
        nodes_dict = {
            "r1": {"host": "10.0.0.1", "user": "admin", "password": ""},
            "r2": {"host": "10.0.0.2", "user": "admin", "password": ""}
        }
        mynodes = nodes(nodes_dict)
        assert len(mynodes.nodelist) == 2
        assert hasattr(mynodes, "r1")
        assert hasattr(mynodes, "r2")

    def test_nodes_run_parallel(self):
        """run() executes on all nodes and returns dict."""
        from connpy.core import nodes

        nodes_dict = {
            "r1": {"host": "10.0.0.1", "user": "admin", "password": ""},
            "r2": {"host": "10.0.0.2", "user": "admin", "password": ""}
        }
        mynodes = nodes(nodes_dict)

        # Mock run on each node — must set output AND status on the node.
        # A factory function captures each node by value, avoiding the
        # late-binding pitfall of closures created in a loop.
        def make_mock(node_ref):
            def mock_run(commands, **kwargs):
                node_ref.output = f"output from {node_ref.unique}"
                node_ref.status = 0
            return mock_run

        for n in mynodes.nodelist:
            n.run = make_mock(n)

        result = mynodes.run(["show version"])
        assert "r1" in result
        assert "r2" in result

    def test_nodes_splitlist(self):
        """_splitlist divides list correctly."""
        from connpy.core import nodes
        mynodes = nodes({"r1": {"host": "1.1.1.1", "user": "", "password": ""}})
        chunks = list(mynodes._splitlist([1, 2, 3, 4, 5], 2))
        assert chunks == [[1, 2], [3, 4], [5]]

    def test_nodes_run_with_vars(self):
        """Variables per node and __global__ work."""
        from connpy.core import nodes

        nodes_dict = {
            "r1": {"host": "10.0.0.1", "user": "admin", "password": ""},
        }
        mynodes = nodes(nodes_dict)

        captured_vars = {}

        def mock_run(commands, vars=None, **kwargs):
            captured_vars.update(vars or {})
            mynodes.r1.output = "ok"
            mynodes.r1.status = 0

        mynodes.r1.run = mock_run

        variables = {
            "__global__": {"mask": "255.255.255.0"},
            "r1": {"ip": "10.0.0.1"}
        }
        mynodes.run(["show ip"], vars=variables)
        assert captured_vars.get("mask") == "255.255.255.0"
        assert captured_vars.get("ip") == "10.0.0.1"

    def test_nodes_on_complete_callback(self):
        """on_complete callback fires per node."""
        from connpy.core import nodes

        nodes_dict = {
            "r1": {"host": "10.0.0.1", "user": "admin", "password": ""},
        }
        mynodes = nodes(nodes_dict)

        completed = []

        def mock_run(commands, **kwargs):
            mynodes.r1.output = "done"
            mynodes.r1.status = 0

        mynodes.r1.run = mock_run

        def on_done(unique, output, status):
            completed.append(unique)

        mynodes.run(["show version"], on_complete=on_done)
        assert "r1" in completed
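`test_nodes_splitlist` pins down the chunking behavior used to bound parallelism: a list is split into successive chunks of at most N items, with a shorter final chunk. A self-contained sketch of that behavior (a generic helper, not connpy's `_splitlist` itself):

```python
def splitlist(items, size):
    # Illustrative only: yield successive chunks of at most `size` items,
    # mirroring the behavior asserted in test_nodes_splitlist.
    for i in range(0, len(items), size):
        yield items[i:i + size]
```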
@@ -1,55 +0,0 @@
import pytest
from unittest.mock import MagicMock, patch
from connpy.services.execution_service import ExecutionService


def test_run_commands_callback(populated_config):
    """Test that run_commands correctly passes on_node_complete to the executor."""
    service = ExecutionService(populated_config)

    # Mock the Nodes class in connpy.services.execution_service
    with patch("connpy.services.execution_service.Nodes") as MockNodes:
        mock_executor = MockNodes.return_value
        mock_executor.run.return_value = {"router1": "output"}

        callback = MagicMock()

        service.run_commands(
            nodes_filter="router1",
            commands=["show version"],
            on_node_complete=callback
        )

        # Verify executor.run was called with on_complete=callback.
        # Note: ExecutionService calls executor.run(..., on_complete=on_node_complete, ...)
        mock_executor.run.assert_called_once()
        args, kwargs = mock_executor.run.call_args
        assert kwargs["on_complete"] == callback


def test_test_commands_callback_regression(populated_config):
    """
    Test that test_commands correctly passes on_node_complete to the executor.
    Regression: ExecutionService.test_commands currently ignores on_node_complete.
    """
    service = ExecutionService(populated_config)

    with patch("connpy.services.execution_service.Nodes") as MockNodes:
        mock_executor = MockNodes.return_value
        mock_executor.test.return_value = {"router1": {"PASS": True}}

        callback = MagicMock()

        service.test_commands(
            nodes_filter="router1",
            commands=["show version"],
            expected=["12.4"],
            on_node_complete=callback
        )

        # This is expected to FAIL because ExecutionService.test_commands
        # doesn't pass on_complete to executor.test
        mock_executor.test.assert_called_once()
        args, kwargs = mock_executor.test.call_args

        # We expect 'on_complete' to be in kwargs and equal to our callback
        assert "on_complete" in kwargs, "on_complete parameter missing in call to executor.test"
        assert kwargs["on_complete"] == callback
@@ -1,202 +0,0 @@
import pytest
import grpc
import json
import os
import threading
from unittest.mock import MagicMock, patch
from concurrent import futures
from connpy.grpc_layer import server, connpy_pb2, connpy_pb2_grpc, stubs
from connpy.services.exceptions import ConnpyError


class MockContext:
    def abort(self, code, details):
        raise Exception(f"gRPC Abort: {code} - {details}")


# --- UNIT TESTS (with mocks) ---

class TestNodeServicerNaming:
    @pytest.fixture
    def servicer(self, populated_config):
        return server.NodeServicer(populated_config)

    @patch("connpy.core.node")
    def test_interact_node_uses_passed_name(self, mock_node, servicer):
        # Set up a request with a custom name
        params = {"name": "custom-node-name@test", "host": "1.2.3.4", "protocol": "ssh"}
        request = connpy_pb2.InteractRequest(
            id="dynamic",
            connection_params_json=json.dumps(params)
        )

        # Mock node to allow _connect
        mock_node_instance = MagicMock()
        mock_node_instance._connect.return_value = True
        mock_node.return_value = mock_node_instance

        # We only need the first iteration of the generator to check naming
        gen = servicer.interact_node(iter([request]), MockContext())
        next(gen)  # Skip the success response

        # Verify that node() was called with the custom name
        mock_node.assert_called()
        found = any(
            call.args[0] == "custom-node-name@test"
            for call in mock_node.call_args_list
        )
        assert found

    @patch("connpy.core.node")
    def test_interact_node_fallback_naming(self, mock_node, servicer):
        # Set up a request without a custom name but with a host
        params = {"host": "my-instance", "protocol": "ssm"}
        request = connpy_pb2.InteractRequest(
            id="dynamic",
            connection_params_json=json.dumps(params)
        )

        mock_node_instance = MagicMock()
        mock_node_instance._connect.return_value = True
        mock_node.return_value = mock_node_instance

        gen = servicer.interact_node(iter([request]), MockContext())
        next(gen)

        # Verify fallback name: dynamic-{host}@remote
        found = any(
            call.args[0] == "dynamic-my-instance@remote"
            for call in mock_node.call_args_list
        )
        assert found


class TestStubsMessageFormatting:
    @patch("termios.tcsetattr")
    @patch("termios.tcgetattr")
    @patch("tty.setraw")
    @patch("os.read")
    @patch("select.select")
    def test_connect_dynamic_msg_formatting_ssm(self, mock_select, mock_read,
                                                mock_setraw, mock_getattr,
                                                mock_setattr):
        from connpy.grpc_layer.stubs import NodeStub

        mock_getattr.return_value = [0, 0, 0, 0, 0, 0, [0] * 32]
        mock_channel = MagicMock()
        stub = NodeStub(mock_channel, "localhost:8048")

        mock_resp = MagicMock()
        mock_resp.success = True
        mock_resp.stdout_data = b''
        stub.stub.interact_node.return_value = iter([mock_resp])
        with patch("connpy.printer.success") as mock_success:
            with patch("sys.stdin.fileno", return_value=0):
                mock_select.return_value = ([], [], [])
                params = {"protocol": "ssm", "host": "i-12345", "name": "my-ssm-node@aws"}

                # Interrupt the interactive loop so connect_dynamic returns
                with patch("select.select", side_effect=KeyboardInterrupt):
                    try:
                        stub.connect_dynamic(params)
                    except KeyboardInterrupt:
                        pass

        mock_success.assert_called()
        msg = mock_success.call_args[0][0]
        assert "Connected to my-ssm-node@aws" in msg
        assert "at i-12345" in msg
        assert ":22" not in msg
        assert "via: ssm" in msg


# --- INTEGRATION TESTS (Real Server/Stub Communication) ---

class TestGRPCIntegration:
    @pytest.fixture
    def grpc_server(self, populated_config):
        """Starts a local gRPC server for integration testing."""
        srv = grpc.server(futures.ThreadPoolExecutor(max_workers=5))

        # Register services
        connpy_pb2_grpc.add_NodeServiceServicer_to_server(server.NodeServicer(populated_config), srv)
        connpy_pb2_grpc.add_ProfileServiceServicer_to_server(server.ProfileServicer(populated_config), srv)
        connpy_pb2_grpc.add_ConfigServiceServicer_to_server(server.ConfigServicer(populated_config), srv)
        connpy_pb2_grpc.add_ExecutionServiceServicer_to_server(server.ExecutionServicer(populated_config), srv)
        connpy_pb2_grpc.add_ImportExportServiceServicer_to_server(server.ImportExportServicer(populated_config), srv)

        port = srv.add_insecure_port('127.0.0.1:0')
        srv.start()
        yield f"127.0.0.1:{port}"
        srv.stop(0)

    @pytest.fixture
    def channel(self, grpc_server):
        with grpc.insecure_channel(grpc_server) as channel:
            yield channel

    @pytest.fixture
    def node_stub(self, channel):
        return stubs.NodeStub(channel, "localhost")

    @pytest.fixture
    def profile_stub(self, channel):
        return stubs.ProfileStub(channel, "localhost")

    @pytest.fixture
    def config_stub(self, channel):
        return stubs.ConfigStub(channel, "localhost")

    def test_list_nodes_integration(self, node_stub):
        nodes = node_stub.list_nodes()
        assert "router1" in nodes
        assert "server1@office" in nodes

    def test_get_node_details_integration(self, node_stub):
        details = node_stub.get_node_details("router1")
        assert details["host"] == "10.0.0.1"

    def test_node_not_found_integration(self, node_stub):
        with pytest.raises(ConnpyError) as exc:
            node_stub.get_node_details("non-existent")
        assert "Node 'non-existent' not found." in str(exc.value)

    def test_list_profiles_integration(self, profile_stub):
        profiles = profile_stub.list_profiles()
        assert "office-user" in profiles

    def test_get_settings_integration(self, config_stub):
        settings = config_stub.get_settings()
        assert "idletime" in settings

    def test_update_setting_integration(self, config_stub):
        config_stub.update_setting("idletime", 99)
        settings = config_stub.get_settings()
        assert settings["idletime"] == 99

    def test_add_delete_node_integration(self, node_stub):
        node_stub.add_node("integration-test-node", {"host": "9.9.9.9"})
        assert "integration-test-node" in node_stub.list_nodes()
        node_stub.delete_node("integration-test-node")
        assert "integration-test-node" not in node_stub.list_nodes()

    def test_import_yaml_integration(self, channel, node_stub):
        import tempfile
        import yaml
        stub = stubs.ImportExportStub(channel, "localhost")

        # ImportExportService expects a flat dict of nodes, not a full config structure
        inventory = {
            "imported-node": {"host": "8.8.8.8", "protocol": "ssh", "type": "connection"}
        }
        yaml_content = yaml.dump(inventory)

        with tempfile.NamedTemporaryFile(mode="w", suffix=".yaml", delete=False) as f:
            f.write(yaml_content)
            temp_path = f.name

        try:
            stub.import_from_file(temp_path)
            # Verify the node was imported and is visible via NodeStub
            nodes = node_stub.list_nodes()
            assert "imported-node" in nodes
        finally:
            if os.path.exists(temp_path):
                os.remove(temp_path)
@@ -1,216 +0,0 @@
"""Tests for connpy.hooks module — MethodHook and ClassHook."""
import pytest
from connpy.hooks import MethodHook, ClassHook


# =========================================================================
# MethodHook Tests
# =========================================================================

class TestMethodHook:
    def test_basic_call(self):
        """Decorated function executes normally."""
        @MethodHook
        def add(a, b):
            return a + b
        assert add(2, 3) == 5

    def test_pre_hook_modifies_args(self):
        """Pre-hook can modify arguments before execution."""
        @MethodHook
        def greet(name):
            return f"Hello {name}"

        def uppercase_hook(name):
            return (name.upper(),), {}

        greet.register_pre_hook(uppercase_hook)
        assert greet("world") == "Hello WORLD"

    def test_post_hook_modifies_result(self):
        """Post-hook can modify the return value."""
        @MethodHook
        def compute(x):
            return x * 2

        def double_result(*args, **kwargs):
            return kwargs["result"] * 2

        compute.register_post_hook(double_result)
        assert compute(5) == 20  # 5*2=10, then 10*2=20

    def test_multiple_pre_hooks_order(self):
        """Pre-hooks execute in registration order."""
        calls = []

        @MethodHook
        def func(x):
            return x

        def hook1(x):
            calls.append("hook1")
            return (x,), {}

        def hook2(x):
            calls.append("hook2")
            return (x,), {}

        func.register_pre_hook(hook1)
        func.register_pre_hook(hook2)
        func(1)
        assert calls == ["hook1", "hook2"]

    def test_multiple_post_hooks_order(self):
        """Post-hooks execute in registration order."""
        calls = []

        @MethodHook
        def func(x):
            return x

        def hook1(*args, **kwargs):
            calls.append("hook1")
            return kwargs["result"]

        def hook2(*args, **kwargs):
            calls.append("hook2")
            return kwargs["result"]

        func.register_post_hook(hook1)
        func.register_post_hook(hook2)
        func(1)
        assert calls == ["hook1", "hook2"]

    def test_pre_hook_exception_continues(self, capsys):
        """If a pre-hook raises, the function still executes."""
        @MethodHook
        def func(x):
            return x + 1

        def bad_hook(x):
            raise RuntimeError("broken hook")

        func.register_pre_hook(bad_hook)
        # Should not raise — the hook error is printed but execution continues
        result = func(5)
        assert result == 6

    def test_post_hook_exception_continues(self, capsys):
        """If a post-hook raises, the result is still returned."""
        @MethodHook
        def func(x):
            return x + 1

        def bad_hook(*args, **kwargs):
            raise RuntimeError("broken post hook")

        func.register_post_hook(bad_hook)
        result = func(5)
        assert result == 6

    def test_method_hook_as_instance_method(self):
        """MethodHook works as a descriptor on a class."""
        class MyClass:
            @MethodHook
            def double(self, x):
                return x * 2

        obj = MyClass()
        assert obj.double(5) == 10

    def test_method_hook_instance_hook_registration(self):
        """Can register hooks via instance method access."""
        class MyClass:
            @MethodHook
            def process(self, x):
                return x

        def add_ten(*args, **kwargs):
            return kwargs["result"] + 10

        obj = MyClass()
        obj.process.register_post_hook(add_ten)
        assert obj.process(5) == 15


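The MethodHook tests above imply a contract: pre-hooks receive the call arguments and return a new `(args, kwargs)` pair, post-hooks receive the arguments plus a `result` keyword and return a (possibly modified) result, and a hook that raises is swallowed so the call proceeds. A minimal sketch of a wrapper honoring that contract (an illustration, not connpy's actual `MethodHook` implementation):

```python
class SimpleMethodHook:
    # Illustrative sketch of the hook contract exercised above;
    # not connpy's real MethodHook (no descriptor support, no error printing).
    def __init__(self, func):
        self.func = func
        self.pre_hooks = []
        self.post_hooks = []

    def register_pre_hook(self, hook):
        self.pre_hooks.append(hook)

    def register_post_hook(self, hook):
        self.post_hooks.append(hook)

    def __call__(self, *args, **kwargs):
        for hook in self.pre_hooks:
            try:
                # Pre-hooks may rewrite the arguments
                args, kwargs = hook(*args, **kwargs)
            except Exception:
                pass  # hook errors are swallowed; the call proceeds
        result = self.func(*args, **kwargs)
        for hook in self.post_hooks:
            try:
                # Post-hooks see the result and may replace it
                result = hook(*args, result=result, **kwargs)
            except Exception:
                pass  # result is kept if a post-hook fails
        return result
```

Hooks run in registration order, matching `test_multiple_pre_hooks_order` and `test_multiple_post_hooks_order`.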
# =========================================================================
# ClassHook Tests
# =========================================================================

class TestClassHook:
    def test_creates_instance(self):
        """ClassHook still creates instances normally."""
        @ClassHook
        class MyClass:
            def __init__(self, value):
                self.value = value

        obj = MyClass(42)
        assert obj.value == 42

    def test_modify_future_instances(self):
        """modify() affects all future instances."""
        @ClassHook
        class MyClass:
            def __init__(self):
                self.x = 1

        def set_x_to_99(instance):
            instance.x = 99

        MyClass.modify(set_x_to_99)
        obj = MyClass()
        assert obj.x == 99

    def test_modify_does_not_affect_past(self):
        """modify() does not affect already-created instances."""
        @ClassHook
        class MyClass:
            def __init__(self):
                self.x = 1

        old_obj = MyClass()

        def set_x_to_99(instance):
            instance.x = 99

        MyClass.modify(set_x_to_99)
        assert old_obj.x == 1  # Not affected
        assert MyClass().x == 99  # New instance IS affected

    def test_instance_modify(self):
        """instance.modify() only affects that specific instance."""
        @ClassHook
        class MyClass:
            def __init__(self):
                self.x = 1

        obj1 = MyClass()
        obj2 = MyClass()

        obj1.modify(lambda inst: setattr(inst, 'x', 999))
        assert obj1.x == 999
        assert obj2.x == 1

    def test_multiple_deferred_hooks(self):
        """Multiple modify() calls apply in order."""
        @ClassHook
        class MyClass:
            def __init__(self):
                self.log = []

        MyClass.modify(lambda inst: inst.log.append("first"))
        MyClass.modify(lambda inst: inst.log.append("second"))

        obj = MyClass()
        assert obj.log == ["first", "second"]

    def test_getattr_delegation(self):
        """ClassHook delegates attribute access to the wrapped class."""
        @ClassHook
        class MyClass:
            class_var = "hello"

            def __init__(self):
                pass

        assert MyClass.class_var == "hello"
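The ClassHook tests describe a class wrapper whose registered `modify()` hooks run against every instance created after registration, with attribute access delegated to the wrapped class. A minimal sketch of that shape (an illustration only, not connpy's actual `ClassHook`, which also supports per-instance `modify`):

```python
class SimpleClassHook:
    # Illustrative sketch: apply deferred "modify" hooks to every future
    # instance and delegate attribute lookups to the wrapped class.
    def __init__(self, cls):
        self._cls = cls
        self._hooks = []

    def modify(self, hook):
        # Hooks registered here affect future instances only
        self._hooks.append(hook)

    def __call__(self, *args, **kwargs):
        instance = self._cls(*args, **kwargs)
        for hook in self._hooks:
            hook(instance)  # applied in registration order
        return instance

    def __getattr__(self, name):
        # Fall through to the wrapped class for class attributes
        return getattr(self._cls, name)
```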
@@ -1,66 +0,0 @@
import pytest
from connpy.services.node_service import NodeService
from connpy.services.exceptions import NodeNotFoundError, NodeAlreadyExistsError


def test_list_nodes_filtering_parity(populated_config):
    """
    Test that list_nodes uses literal 'in' logic instead of re.search.

    Regression: NodeService currently uses re.search in some versions,
    but we want to ensure it uses literal 'in' for parity.
    """
    service = NodeService(populated_config)

    # With 'in' logic, '1' should match all nodes containing '1':
    # router1, server1@office, db1@datacenter@office
    nodes = service.list_nodes(filter_str="1")
    assert len(nodes) == 3
    assert "router1" in nodes
    assert "server1@office" in nodes
    assert "db1@datacenter@office" in nodes

    # Test regex-specific characters. This documents current behavior:
    # NodeService uses re.search, so '^router' will match 'router1'.
    nodes_regex = service.list_nodes(filter_str="^router")

    assert "router1" in nodes_regex


def test_list_nodes_dynamic_formatting(populated_config):
    """
    Test that list_nodes supports dynamic formatting for any node attribute.

    Regression: NodeService currently has hardcoded support for name, location, host.
    """
    service = NodeService(populated_config)

    # Try to format using 'user' and 'protocol', which are NOT in the
    # hardcoded list (name, location, host).
    format_str = "{name} -> {user}@{host} ({protocol})"

    # router1: host=10.0.0.1, user=admin, protocol=ssh
    # Expected: "router1 -> admin@10.0.0.1 (ssh)"
    formatted = service.list_nodes(filter_str="router1", format_str=format_str)

    assert len(formatted) == 1
    # This will FAIL if it only supports {name}, {location}, {host}.
    assert formatted[0] == "router1 -> admin@10.0.0.1 (ssh)"


def test_node_editing_parity(populated_config):
    """
    Test that add_node raises NodeAlreadyExistsError when (mis)used for editing.

    Regression: connapp._mod calls add_node instead of update_node.
    """
    service = NodeService(populated_config)

    # router1 already exists in populated_config.
    # Calling add_node with an existing ID raises NodeAlreadyExistsError,
    # which is why connapp._mod (which calls add_node) is currently broken for editing.
    with pytest.raises(NodeAlreadyExistsError):
        service.add_node("router1", {"host": "1.1.1.1"})


def test_list_nodes_case_sensitivity(populated_config):
    """Test that filtering respects the case setting in config."""
    service = NodeService(populated_config)

    # Default case is False (case-insensitive).
    nodes = service.list_nodes(filter_str="ROUTER")
    assert "router1" in nodes
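The filtering and formatting behavior these tests describe can be sketched as follows. This is a hypothetical, simplified stand-in for `NodeService.list_nodes`, written only to illustrate the expected semantics; it is not connpy's actual code, and the `nodes` dict below is invented to mirror the fixture.

```python
# Hypothetical sketch of the list_nodes semantics the tests above expect;
# NOT connpy's real NodeService implementation.
def list_nodes(nodes, filter_str="", format_str=None, case=False):
    """Filter node names with literal 'in' logic and optionally format them.

    nodes: dict mapping node name -> attribute dict (host, user, protocol, ...).
    """
    needle = filter_str if case else filter_str.lower()
    matched = [
        name for name in nodes
        if needle in (name if case else name.lower())
    ]
    if format_str is None:
        return matched
    # Dynamic formatting: every node attribute is handed to the format
    # string, plus the node's own name.
    return [format_str.format(name=name, **nodes[name]) for name in matched]

nodes = {"router1": {"host": "10.0.0.1", "user": "admin", "protocol": "ssh"}}
print(list_nodes(nodes, "ROUTER", "{name} -> {user}@{host} ({protocol})"))
# ['router1 -> admin@10.0.0.1 (ssh)']
```

Note that under literal `'in'` matching, a regex-style filter such as `"^router"` matches nothing, which is exactly the behavioral difference the parity test probes.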
@@ -1,327 +0,0 @@
"""Tests for connpy.plugins module."""
import os
import textwrap
import pytest
from connpy.plugins import Plugins


# ---------------------------------------------------------------------------
# Helper: write a plugin script to a file
# ---------------------------------------------------------------------------
def _write_plugin(path, code):
    """Write dedented code to a file."""
    with open(path, "w") as f:
        f.write(textwrap.dedent(code))


# =========================================================================
# verify_script tests
# =========================================================================

class TestVerifyScript:
    def test_valid_parser_entrypoint(self, tmp_path):
        p = tmp_path / "good.py"
        _write_plugin(p, """\
            import argparse

            class Parser:
                def __init__(self):
                    self.parser = argparse.ArgumentParser()

            class Entrypoint:
                def __init__(self, args, parser, connapp):
                    pass
            """)
        plugins = Plugins()
        assert plugins.verify_script(str(p)) == False

    def test_valid_preload_only(self, tmp_path):
        p = tmp_path / "preload.py"
        _write_plugin(p, """\
            class Preload:
                def __init__(self, connapp):
                    pass
            """)
        plugins = Plugins()
        assert plugins.verify_script(str(p)) == False

    def test_valid_all_three(self, tmp_path):
        p = tmp_path / "all.py"
        _write_plugin(p, """\
            import argparse

            class Parser:
                def __init__(self):
                    self.parser = argparse.ArgumentParser()

            class Entrypoint:
                def __init__(self, args, parser, connapp):
                    pass

            class Preload:
                def __init__(self, connapp):
                    pass
            """)
        plugins = Plugins()
        assert plugins.verify_script(str(p)) == False

    def test_parser_without_entrypoint(self, tmp_path):
        p = tmp_path / "bad.py"
        _write_plugin(p, """\
            import argparse

            class Parser:
                def __init__(self):
                    self.parser = argparse.ArgumentParser()
            """)
        plugins = Plugins()
        result = plugins.verify_script(str(p))
        assert result  # Should be a truthy error string
        assert "Entrypoint" in result

    def test_entrypoint_without_parser(self, tmp_path):
        p = tmp_path / "bad.py"
        _write_plugin(p, """\
            class Entrypoint:
                def __init__(self, args, parser, connapp):
                    pass
            """)
        plugins = Plugins()
        result = plugins.verify_script(str(p))
        assert result
        assert "Parser" in result

    def test_no_valid_class(self, tmp_path):
        p = tmp_path / "empty.py"
        _write_plugin(p, """\
            def some_function():
                pass
            """)
        plugins = Plugins()
        result = plugins.verify_script(str(p))
        assert result
        assert "No valid class" in result

    def test_parser_missing_self_parser(self, tmp_path):
        p = tmp_path / "bad.py"
        _write_plugin(p, """\
            class Parser:
                def __init__(self):
                    self.something = "not parser"

            class Entrypoint:
                def __init__(self, args, parser, connapp):
                    pass
            """)
        plugins = Plugins()
        result = plugins.verify_script(str(p))
        assert result
        assert "self.parser" in result

    def test_entrypoint_wrong_args(self, tmp_path):
        p = tmp_path / "bad.py"
        _write_plugin(p, """\
            import argparse

            class Parser:
                def __init__(self):
                    self.parser = argparse.ArgumentParser()

            class Entrypoint:
                def __init__(self, args):
                    pass
            """)
        plugins = Plugins()
        result = plugins.verify_script(str(p))
        assert result
        assert "Entrypoint" in result

    def test_preload_wrong_args(self, tmp_path):
        p = tmp_path / "bad.py"
        _write_plugin(p, """\
            class Preload:
                def __init__(self, connapp, extra):
                    pass
            """)
        plugins = Plugins()
        result = plugins.verify_script(str(p))
        assert result
        assert "Preload" in result

    def test_disallowed_top_level(self, tmp_path):
        p = tmp_path / "bad.py"
        _write_plugin(p, """\
            MY_GLOBAL = "not allowed"

            class Preload:
                def __init__(self, connapp):
                    pass
            """)
        plugins = Plugins()
        result = plugins.verify_script(str(p))
        assert result
        assert "not allowed" in result.lower() or "Plugin can only have" in result

    def test_syntax_error(self, tmp_path):
        p = tmp_path / "bad.py"
        _write_plugin(p, """\
            def broken(
            """)
        plugins = Plugins()
        result = plugins.verify_script(str(p))
        assert result
        assert "Syntax error" in result

    def test_if_name_main_allowed(self, tmp_path):
        p = tmp_path / "good.py"
        _write_plugin(p, """\
            class Preload:
                def __init__(self, connapp):
                    pass

            if __name__ == "__main__":
                print("standalone")
            """)
        plugins = Plugins()
        assert plugins.verify_script(str(p)) == False

    def test_other_if_not_allowed(self, tmp_path):
        p = tmp_path / "bad.py"
        _write_plugin(p, """\
            import sys

            if sys.platform == "linux":
                pass

            class Preload:
                def __init__(self, connapp):
                    pass
            """)
        plugins = Plugins()
        result = plugins.verify_script(str(p))
        assert result
        assert "__name__" in result


# =========================================================================
# Import and loading tests
# =========================================================================

class TestPluginLoading:
    def test_import_from_path(self, tmp_path):
        p = tmp_path / "mymod.py"
        _write_plugin(p, """\
            MY_VAR = 42
            """)
        plugins = Plugins()
        module = plugins._import_from_path(str(p))
        assert module.MY_VAR == 42

    def test_import_plugins_to_argparse(self, tmp_path):
        """Valid plugins get loaded into argparse."""
        import argparse

        plugin_dir = tmp_path / "plugins"
        plugin_dir.mkdir()
        _write_plugin(plugin_dir / "myplugin.py", """\
            import argparse

            class Parser:
                def __init__(self):
                    self.parser = argparse.ArgumentParser(description="My plugin")

            class Entrypoint:
                def __init__(self, args, parser, connapp):
                    pass
            """)

        parser = argparse.ArgumentParser()
        subparsers = parser.add_subparsers()

        plugins = Plugins()
        plugins._import_plugins_to_argparse(str(plugin_dir), subparsers)

        assert "myplugin" in plugins.plugins
        assert "myplugin" in plugins.plugin_parsers

    def test_plugin_name_collision(self, tmp_path):
        """Plugin with same name as existing subcommand is skipped."""
        import argparse

        plugin_dir = tmp_path / "plugins"
        plugin_dir.mkdir()
        _write_plugin(plugin_dir / "existcmd.py", """\
            import argparse

            class Parser:
                def __init__(self):
                    self.parser = argparse.ArgumentParser()

            class Entrypoint:
                def __init__(self, args, parser, connapp):
                    pass
            """)

        parser = argparse.ArgumentParser()
        subparsers = parser.add_subparsers()
        subparsers.add_parser("existcmd")  # Already taken

        plugins = Plugins()
        plugins._import_plugins_to_argparse(str(plugin_dir), subparsers)

        assert "existcmd" not in plugins.plugins

    def test_preload_registration(self, tmp_path):
        """Preload class gets registered in preloads dict."""
        import argparse

        plugin_dir = tmp_path / "plugins"
        plugin_dir.mkdir()
        _write_plugin(plugin_dir / "preloader.py", """\
            class Preload:
                def __init__(self, connapp):
                    pass
            """)

        parser = argparse.ArgumentParser()
        subparsers = parser.add_subparsers()

        plugins = Plugins()
        plugins._import_plugins_to_argparse(str(plugin_dir), subparsers)

        assert "preloader" in plugins.preloads

    def test_invalid_plugin_skipped(self, tmp_path, capsys):
        """Invalid plugin is skipped with error message."""
        import argparse

        plugin_dir = tmp_path / "plugins"
        plugin_dir.mkdir()
        _write_plugin(plugin_dir / "badplugin.py", """\
            MY_GLOBAL = "bad"
            """)

        parser = argparse.ArgumentParser()
        subparsers = parser.add_subparsers()

        plugins = Plugins()
        plugins._import_plugins_to_argparse(str(plugin_dir), subparsers)

        assert "badplugin" not in plugins.plugins
        captured = capsys.readouterr()
        assert "Failed to load plugin" in captured.err or "Failed to load plugin" in captured.out

    def test_empty_directory(self, tmp_path):
        """Empty directory doesn't cause errors."""
        import argparse

        plugin_dir = tmp_path / "plugins"
        plugin_dir.mkdir()

        parser = argparse.ArgumentParser()
        subparsers = parser.add_subparsers()

        plugins = Plugins()
        plugins._import_plugins_to_argparse(str(plugin_dir), subparsers)

        assert len(plugins.plugins) == 0
@@ -1,104 +0,0 @@
"""Tests for connpy.printer module."""
import sys
from io import StringIO
from connpy import printer


class TestPrinter:
    def test_info_output(self, capsys):
        printer.info("hello world")
        captured = capsys.readouterr()
        assert "[i] hello world" in captured.out

    def test_success_output(self, capsys):
        printer.success("done")
        captured = capsys.readouterr()
        assert "[✓] done" in captured.out

    def test_warning_output(self, capsys):
        printer.warning("careful")
        captured = capsys.readouterr()
        assert "[!] careful" in captured.out

    def test_error_output(self, capsys):
        printer.error("failed")
        captured = capsys.readouterr()
        assert "[✗] failed" in captured.err

    def test_debug_output(self, capsys):
        printer.debug("debug info")
        captured = capsys.readouterr()
        assert "[d] debug info" in captured.out

    def test_start_output(self, capsys):
        printer.start("starting")
        captured = capsys.readouterr()
        assert "[+] starting" in captured.out

    def test_custom_output(self, capsys):
        printer.custom("TAG", "custom message")
        captured = capsys.readouterr()
        assert "[TAG] custom message" in captured.out

    def test_multiline_indentation(self, capsys):
        printer.info("line1\nline2\nline3")
        captured = capsys.readouterr()
        lines = captured.out.strip().split("\n")
        assert lines[0] == "[i] line1"
        # Second line should be indented by len("[i] ") = 4 chars
        assert lines[1].startswith("    line2")
        assert lines[2].startswith("    line3")

    def test_data_output(self, capsys):
        printer.data("my title", "key: value")
        captured = capsys.readouterr()
        # Rich output is formatted with ANSI escape sequences or box-drawing chars;
        # just check that title and content appear in the output stream.
        assert "my title" in captured.out
        assert "key" in captured.out

    def test_node_panel_pass(self, capsys):
        printer.node_panel("node1", "output line\n", 0)
        captured = capsys.readouterr()
        assert "node1" in captured.out
        assert "PASS" in captured.out
        assert "output line" in captured.out

    def test_node_panel_fail(self, capsys):
        printer.node_panel("node2", "error line\n", 1)
        captured = capsys.readouterr()
        assert "node2" in captured.out
        assert "FAIL" in captured.out
        assert "error line" in captured.out

    def test_test_panel(self, capsys):
        printer.test_panel("node1", "output", 0, {"check1": True, "check2": False})
        captured = capsys.readouterr()
        assert "node1" in captured.out
        assert "check1" in captured.out
        assert "check2" in captured.out

    def test_test_summary(self, capsys):
        results = {"node1": {"test1": True}, "node2": {"test2": False}}
        printer.test_summary(results)
        captured = capsys.readouterr()
        assert "node1" in captured.out
        assert "node2" in captured.out
        assert "test1" in captured.out
        assert "test2" in captured.out

    def test_header_output(self, capsys):
        printer.header("My Header")
        captured = capsys.readouterr()
        assert "My Header" in captured.out

    def test_kv_output(self, capsys):
        printer.kv("mykeystring", "myvaluestring")
        captured = capsys.readouterr()
        assert "mykeystring" in captured.out
        assert "myvaluestring" in captured.out

    def test_confirm_action(self, capsys):
        printer.confirm_action("router1", "delete")
        captured = capsys.readouterr()
        assert "[i] delete: router1" in captured.out
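The prefix-and-indent behavior that `test_multiline_indentation` checks can be sketched in a few lines. This is an illustrative stand-in for the formatting rule only, not connpy's printer (which also handles colors, rich panels, and per-thread streams):

```python
# Illustrative sketch of the "[tag] first line, continuation lines indented
# to match" rule tested above; NOT connpy's actual printer implementation.
def format_message(tag, message):
    """Prefix the first line with [tag] and indent continuation lines to align."""
    prefix = f"[{tag}] "
    pad = " " * len(prefix)
    first, *rest = message.split("\n")
    return "\n".join([prefix + first] + [pad + line for line in rest])

print(format_message("i", "line1\nline2\nline3"))
# [i] line1
#     line2
#     line3
```

Since `len("[i] ")` is 4, continuation lines start with four spaces, which is exactly what the test's `startswith("    line2")` assertion encodes.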
@@ -1,65 +0,0 @@
import threading
import io
import time
import sys
import pytest
from connpy import printer


def test_printer_thread_isolation():
    """Verify that printer output is isolated per thread when using set_thread_stream."""
    num_threads = 5
    iterations = 20
    results = {}

    def worker(thread_id):
        # Create a private buffer for this thread
        buf = io.StringIO()
        printer.set_thread_stream(buf)

        # Ensure we have a clean console for this thread.
        # In a real gRPC request this happens automatically, as it's a new thread.
        printer.set_thread_console(None)

        # Each thread prints its own ID
        expected_msg = f"Thread-{thread_id}"
        for _ in range(iterations):
            printer.info(expected_msg)
            time.sleep(0.01)

        results[thread_id] = buf.getvalue()
        printer.set_thread_stream(None)

    threads = []
    for i in range(num_threads):
        t = threading.Thread(target=worker, args=(i,))
        threads.append(t)
        t.start()

    for t in threads:
        t.join()

    # Validation
    for thread_id, output in results.items():
        expected_msg = f"Thread-{thread_id}"
        assert expected_msg in output

        # Ensure no leaks from other threads
        for other_id in range(num_threads):
            if other_id == thread_id:
                continue
            assert f"Thread-{other_id}" not in output


def test_printer_manual_stream():
    """Verify that setting a thread stream correctly captures printer output in the current thread."""
    buf = io.StringIO()

    # We must clear the thread-local console to force it to pick up the new sys.stdout proxy
    printer.set_thread_console(None)
    printer.set_thread_stream(buf)

    printer.info("Captured-Message")

    output = buf.getvalue()
    printer.set_thread_stream(None)
    printer.set_thread_console(None)

    assert "Captured-Message" in output
@@ -1,83 +0,0 @@
import pytest
from connpy.services.profile_service import ProfileService
from connpy.services.exceptions import ProfileNotFoundError, ProfileAlreadyExistsError


def test_profile_crud(populated_config):
    """Test basic CRUD operations for profiles."""
    service = ProfileService(populated_config)

    # List
    profiles = service.list_profiles()
    assert "default" in profiles
    assert "office-user" in profiles

    # Get
    office = service.get_profile("office-user")
    assert office["user"] == "officeadmin"

    # Add
    new_data = {
        "user": "newadmin",
        "password": "newpassword"
    }
    service.add_profile("new-profile", new_data)
    assert "new-profile" in service.list_profiles()
    assert service.get_profile("new-profile")["user"] == "newadmin"

    # Update
    update_data = {
        "user": "updatedadmin"
    }
    service.update_profile("new-profile", update_data)
    assert service.get_profile("new-profile")["user"] == "updatedadmin"

    # Delete
    service.delete_profile("new-profile")
    assert "new-profile" not in service.list_profiles()


def test_profile_inheritance_parity(populated_config):
    """
    Test that profiles can inherit from other profiles.

    Regression: ProfileService currently doesn't resolve inheritance within profiles.
    """
    service = ProfileService(populated_config)

    # Create a profile that inherits from 'office-user'.
    # 'office-user' has user='officeadmin', password='officepass'.
    inherited_data = {
        "user": "@office-user",
        "options": "-v"
    }
    service.add_profile("inherited-profile", inherited_data)

    # When we get the profile, we expect it to be resolved if inheritance is
    # supported. This is a common pattern in connpy for nodes, and the task
    # mentions "profile CRUD and inheritance parity".
    profile = service.get_profile("inherited-profile")

    # If inheritance is resolved, user should be 'officeadmin'.
    # This is expected to FAIL if ProfileService just returns the raw dict.
    assert profile["user"] == "officeadmin"
    assert profile["password"] == "officepass"
    assert profile["options"] == "-v"


def test_delete_default_profile_fails(populated_config):
    """Test that deleting the 'default' profile is prohibited."""
    service = ProfileService(populated_config)
    from connpy.services.exceptions import InvalidConfigurationError

    with pytest.raises(InvalidConfigurationError, match="Cannot delete the 'default' profile"):
        service.delete_profile("default")


def test_delete_used_profile_fails(populated_config):
    """Test that deleting a profile used by nodes is prohibited."""
    service = ProfileService(populated_config)
    from connpy.services.exceptions import InvalidConfigurationError

    # Make sure a node in populated_config uses the profile:
    # add a node that uses 'office-user'.
    populated_config._connections_add(id="testnode", host="1.1.1.1", user="@office-user")

    with pytest.raises(InvalidConfigurationError, match="is used by nodes"):
        service.delete_profile("office-user")
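The resolution behavior the inheritance test expects could be sketched like this: a value of the form `@parent-name` pulls in the parent profile's fields, with the child's own explicit keys taking precedence. This is a hypothetical helper written only to illustrate the expected semantics (it handles a single `@` reference per profile), not ProfileService's actual code:

```python
# Hypothetical sketch of '@parent' profile inheritance; NOT connpy's
# actual ProfileService implementation. Handles one '@' reference per profile.
def resolve_profile(profiles, name):
    """Return profile 'name' with any '@parent' reference expanded."""
    profile = dict(profiles[name])
    for key, value in list(profile.items()):
        if isinstance(value, str) and value.startswith("@"):
            parent = resolve_profile(profiles, value[1:])
            # Parent values fill in; keys the child sets explicitly
            # (other than the '@' reference itself) win.
            merged = dict(parent)
            del profile[key]
            merged.update(profile)
            return merged
    return profile

profiles = {
    "office-user": {"user": "officeadmin", "password": "officepass"},
    "inherited-profile": {"user": "@office-user", "options": "-v"},
}
print(resolve_profile(profiles, "inherited-profile"))
# {'user': 'officeadmin', 'password': 'officepass', 'options': '-v'}
```

This reproduces exactly what the test asserts: the `@office-user` reference resolves to `user='officeadmin'` and `password='officepass'`, while the child's own `options='-v'` survives.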
@@ -1,42 +0,0 @@
import pytest
from unittest.mock import patch, MagicMock
from connpy.services.provider import ServiceProvider


def test_service_provider_local_mode():
    config_mock = MagicMock()
    with patch("connpy.services.provider.NodeService", create=True) as MockNodeService, \
         patch("connpy.services.provider.ProfileService", create=True), \
         patch("connpy.services.provider.ConfigService", create=True), \
         patch("connpy.services.provider.PluginService", create=True), \
         patch("connpy.services.provider.AIService", create=True), \
         patch("connpy.services.provider.SystemService", create=True), \
         patch("connpy.services.provider.ExecutionService", create=True), \
         patch("connpy.services.provider.ImportExportService", create=True):

        provider = ServiceProvider(config_mock, mode="local")

        assert provider.mode == "local"
        assert provider.config == config_mock
        # Verify that an attribute was created
        assert provider.nodes is not None


def test_service_provider_remote_mode():
    config_mock = MagicMock()
    with patch("connpy.services.provider.ConfigService", create=True) as MockConfigService, \
         patch("grpc.insecure_channel", create=True) as MockChannel:

        provider = ServiceProvider(config_mock, mode="remote", remote_host="localhost:50051")

        # Verify ConfigService is initialized locally
        assert provider.config_svc is not None

        # Verify the gRPC channel was created
        MockChannel.assert_called_once_with("localhost:50051")

        # Verify a stub was assigned
        assert provider.nodes is not None


def test_service_provider_unknown_mode():
    config_mock = MagicMock()
    with pytest.raises(ValueError, match="Unknown service mode: invalid_mode"):
        ServiceProvider(config_mock, mode="invalid_mode")
@@ -1,103 +0,0 @@
"""Tests for connpy.services.sync_service"""
import pytest
import os
from unittest.mock import MagicMock, patch
from connpy.services.sync_service import SyncService


@pytest.fixture
def mock_config():
    config = MagicMock()
    config.defaultdir = "/fake/dir"
    config.file = "/fake/dir/config.yaml"
    config.key = "/fake/dir/.osk"
    config.cachefile = "/fake/dir/.cache"
    config.fzf_cachefile = "/fake/dir/.fzf_cache"
    config.config = {"sync": True, "sync_remote": False}
    return config


class TestSyncService:
    def test_init(self, mock_config):
        s = SyncService(mock_config)
        assert s.sync_enabled is True
        assert s.token_file == os.path.join("/fake/dir", "gtoken.json")

    @patch("connpy.services.sync_service.os.path.exists")
    @patch("connpy.services.sync_service.Credentials")
    def test_get_credentials_success(self, MockCreds, mock_exists, mock_config):
        mock_exists.return_value = True
        mock_cred_instance = MagicMock()
        mock_cred_instance.valid = True
        MockCreds.from_authorized_user_file.return_value = mock_cred_instance

        s = SyncService(mock_config)
        creds = s.get_credentials()
        assert creds == mock_cred_instance

    @patch("connpy.services.sync_service.os.path.exists")
    def test_get_credentials_not_found(self, mock_exists, mock_config):
        mock_exists.return_value = False
        s = SyncService(mock_config)
        assert s.get_credentials() is None

    @patch("connpy.services.sync_service.zipfile.ZipFile")
    @patch("connpy.services.sync_service.os.path.exists")
    @patch("connpy.services.sync_service.os.path.basename")
    def test_compress_and_upload_local(self, mock_basename, mock_exists, MockZipFile, mock_config):
        mock_basename.return_value = "config.yaml"
        mock_exists.return_value = True
        s = SyncService(mock_config)

        # Mock list_backups and upload_file to avoid real API calls
        s.list_backups = MagicMock(return_value=[])
        s.upload_file = MagicMock(return_value=True)

        zip_mock = MagicMock()
        MockZipFile.return_value.__enter__.return_value = zip_mock

        s.compress_and_upload()
        # Verify the zip was created with the local config and key
        zip_mock.write.assert_any_call(s.config.file, "config.yaml")
        zip_mock.write.assert_any_call(s.config.key, ".osk")

    @patch("connpy.services.sync_service.zipfile.ZipFile")
    @patch("connpy.services.sync_service.os.path.exists")
    @patch("connpy.services.sync_service.os.path.dirname")
    @patch("connpy.services.sync_service.os.remove")
    def test_perform_restore(self, mock_remove, mock_dirname, mock_exists, MockZipFile, mock_config):
        mock_dirname.return_value = "/fake/dir"

        # Mock exists to return True for key and zip, but False for caches
        # during the cleanup phase
        def exists_side_effect(path):
            if ".cache" in path or ".fzf_cache" in path:
                return False
            return True
        mock_exists.side_effect = exists_side_effect

        s = SyncService(mock_config)
        zip_mock = MagicMock()
        zip_mock.namelist.return_value = ["config.yaml", ".osk"]
        MockZipFile.return_value.__enter__.return_value = zip_mock

        with patch("connpy.services.sync_service.yaml.safe_load") as mock_load:
            mock_load.return_value = {"connections": {}, "profiles": {}, "config": {}}
            assert s.perform_restore("/fake/zip.zip") is True

        zip_mock.extract.assert_any_call(".osk", "/fake/dir")

    @patch.object(SyncService, "get_credentials")
    @patch("connpy.services.sync_service.build")
    def test_list_backups(self, mock_build, mock_get_credentials, mock_config):
        mock_get_credentials.return_value = MagicMock()
        mock_service = MagicMock()
        mock_build.return_value = mock_service

        mock_service.files().list().execute.return_value = {
            "files": [
                {"id": "1", "name": "backup1.zip", "appProperties": {"timestamp": "1000", "date": "2024"}}
            ]
        }

        s = SyncService(mock_config)
        files = s.list_backups()
        assert len(files) == 1
        assert files[0]["id"] == "1"
        assert files[0]["timestamp"] == "1000"
@@ -1,171 +0,0 @@
import asyncio
import os
import sys
import termios
import tty
import signal
import struct
import fcntl
import threading


class LocalStream:
    """
    Asynchronous stream wrapper for local stdin/stdout.
    Handles terminal raw mode, async I/O, and SIGWINCH signals.
    """
    def __init__(self):
        self.stdin_fd = sys.stdin.fileno()
        self.stdout_fd = sys.stdout.fileno()
        self.original_tty_settings = None
        self.resize_callback = None
        self._reader_queue = asyncio.Queue()
        self._loop = None

    def setup(self, resize_callback=None):
        self._loop = asyncio.get_running_loop()
        self.resize_callback = resize_callback

        # Save original terminal settings
        try:
            self.original_tty_settings = termios.tcgetattr(self.stdin_fd)
            tty.setraw(self.stdin_fd)
        except termios.error:
            # Not a TTY; stdin may be piped or redirected
            pass

        # Set stdin non-blocking
        flags = fcntl.fcntl(self.stdin_fd, fcntl.F_GETFL)
        fcntl.fcntl(self.stdin_fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

        # Set up the read callback
        self._loop.add_reader(self.stdin_fd, self._read_ready)

        # Register SIGWINCH
        if resize_callback:
            try:
                self._loop.add_signal_handler(signal.SIGWINCH, self._handle_winch)
            except (NotImplementedError, RuntimeError):
                # Signal handling is not supported on some loops (e.g., Windows Proactor)
                pass

    def teardown(self):
        if self._loop:
            try:
                self._loop.remove_reader(self.stdin_fd)
            except Exception:
                pass
            if self.resize_callback:
                try:
                    self._loop.remove_signal_handler(signal.SIGWINCH)
                except Exception:
                    pass

        # Restore terminal settings
        if self.original_tty_settings is not None:
            try:
                termios.tcsetattr(self.stdin_fd, termios.TCSADRAIN, self.original_tty_settings)
            except termios.error:
                pass

        # Restore blocking mode for stdin
        try:
            flags = fcntl.fcntl(self.stdin_fd, fcntl.F_GETFL)
            fcntl.fcntl(self.stdin_fd, fcntl.F_SETFL, flags & ~os.O_NONBLOCK)
        except Exception:
            pass

    def _read_ready(self):
        try:
            # Read whatever is available
            data = os.read(self.stdin_fd, 4096)
            if data:
                self._reader_queue.put_nowait(data)
            else:
                self._reader_queue.put_nowait(b'')  # EOF
        except BlockingIOError:
            pass
        except OSError:
            self._reader_queue.put_nowait(b'')  # EOF on error

    async def read(self) -> bytes:
        """Asynchronously read bytes from stdin."""
        return await self._reader_queue.get()

    async def write(self, data: bytes):
        """Asynchronously write bytes to stdout."""
        if not data:
            return

        try:
            os.write(self.stdout_fd, data)
        except OSError:
            pass

    def _handle_winch(self):
        if self.resize_callback:
            try:
                # Use ioctl to get the current window size
                s = struct.pack("HHHH", 0, 0, 0, 0)
                a = fcntl.ioctl(self.stdout_fd, termios.TIOCGWINSZ, s)
                rows, cols, _, _ = struct.unpack("HHHH", a)

                # Schedule the callback safely inside the asyncio loop
                # instead of running it raw in the signal handler
                self._loop.call_soon(self.resize_callback, rows, cols)
            except Exception:
                pass


class RemoteStream:
    """
    Asynchronous stream wrapper for gRPC remote connections.
    Bridges the blocking gRPC iterators with the async _async_interact_loop.
    """
    def __init__(self, request_iterator, response_queue):
        self.request_iterator = request_iterator
        self.response_queue = response_queue
        self.running = True
        self._reader_queue = asyncio.Queue()
        self.resize_callback = None
        self._loop = None
        self.t = None

    def setup(self, resize_callback=None):
        self._loop = asyncio.get_running_loop()
        self.resize_callback = resize_callback

        def read_requests():
            try:
                for req in self.request_iterator:
                    if not self.running:
                        break
                    if req.cols > 0 and req.rows > 0:
                        if self.resize_callback:
                            self._loop.call_soon_threadsafe(self.resize_callback, req.rows, req.cols)
                    if req.stdin_data:
                        self._loop.call_soon_threadsafe(self._reader_queue.put_nowait, req.stdin_data)
            except Exception:
                pass
            finally:
                if self._loop and not self._loop.is_closed():
                    try:
                        self._loop.call_soon_threadsafe(self._reader_queue.put_nowait, b'')
                    except RuntimeError:
                        pass

        self.t = threading.Thread(target=read_requests, daemon=True)
        self.t.start()

    def teardown(self):
        self.running = False
        self.response_queue.put(None)  # Signal EOF

    async def read(self) -> bytes:
        """Asynchronously read bytes from the gRPC iterator queue."""
        return await self._reader_queue.get()

    async def write(self, data: bytes):
        """Asynchronously write bytes to the gRPC response queue."""
        if data:
            self.response_queue.put(data)
|
||||
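The core pattern in `RemoteStream.setup` — a worker thread draining a blocking iterator into an `asyncio.Queue` via `call_soon_threadsafe`, with an empty chunk as the EOF sentinel — can be reduced to a standalone sketch. The names `pump` and `collect` are illustrative, not from the codebase:

```python
import asyncio
import threading

def pump(blocking_iter, loop, queue):
    """Runs in a worker thread: forward each item into the asyncio queue."""
    try:
        for item in blocking_iter:
            # Queue.put_nowait is not thread-safe on its own; schedule it on the loop
            loop.call_soon_threadsafe(queue.put_nowait, item)
    finally:
        loop.call_soon_threadsafe(queue.put_nowait, b"")  # empty chunk = EOF

async def collect(blocking_iter):
    loop = asyncio.get_running_loop()
    queue = asyncio.Queue()
    threading.Thread(target=pump, args=(blocking_iter, loop, queue), daemon=True).start()
    chunks = []
    while True:
        data = await queue.get()
        if not data:  # EOF sentinel, mirroring what RemoteStream.read would see
            break
        chunks.append(data)
    return chunks
```

Scheduling the `put_nowait` onto the loop (rather than calling it from the thread) is the load-bearing detail: it keeps all queue mutations on the event-loop thread, which is the same reason the resize callback above goes through `call_soon_threadsafe`.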
@@ -1,373 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.6">
<title>connpy.cli.ai_handler API documentation</title>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.ai_handler</code></h1>
</header>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.ai_handler.AIHandler"><code class="flex name class">
<span>class <span class="ident">AIHandler</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<pre><code class="python">class AIHandler:
    def __init__(self, app):
        self.app = app

    def dispatch(self, args):
        if args.list_sessions:
            sessions = self.app.services.ai.list_sessions()
            if not sessions:
                printer.info("No saved AI sessions found.")
                return
            columns = ["ID", "Title", "Created At", "Model"]
            rows = [[s["id"], s["title"], s["created_at"], s["model"]] for s in sessions]
            printer.table("AI Persisted Sessions", columns, rows)
            return

        if args.delete_session:
            try:
                self.app.services.ai.delete_session(args.delete_session[0])
                printer.success(f"Session {args.delete_session[0]} deleted.")
            except Exception as e:
                printer.error(str(e))
            return

        # Determine which session_id to resume
        session_id = None
        if args.resume:
            sessions = self.app.services.ai.list_sessions()
            session_id = sessions[0]["id"] if sessions else None
            if not session_id:
                printer.warning("No previous session found to resume.")
        elif args.session:
            session_id = args.session[0]

        # Collect extra arguments for the AI service
        # Priority: CLI args > local configuration
        settings = self.app.services.config_svc.get_settings().get("ai", {})
        arguments = {}

        for key in ["engineer_model", "engineer_api_key", "architect_model", "architect_api_key"]:
            cli_val = getattr(args, key, None)
            if cli_val:
                arguments[key] = cli_val[0]
            elif settings.get(key):
                arguments[key] = settings.get(key)

        # Check keys only if running in local mode (not remote)
        if getattr(self.app.services, "mode", "local") == "local":
            if not arguments.get("engineer_api_key"):
                printer.error("Engineer API key not configured. The chat cannot start.")
                printer.info("Use 'connpy config --engineer-api-key &lt;key&gt;' to set it.")
                sys.exit(1)
            if not arguments.get("architect_api_key"):
                printer.warning("Architect API key not configured. Architect will be unavailable.")
                printer.info("Use 'connpy config --architect-api-key &lt;key&gt;' to enable it.")

        # The CLI delegates the rest of the interaction to the underlying agent
        self.app.myai = self.app.services.ai
        self.ai_overrides = arguments

        if args.ask:
            self.single_question(args, session_id)
        else:
            self.interactive_chat(args, session_id)

    def single_question(self, args, session_id):
        query = " ".join(args.ask)
        with console.status("[ai_status]Agent is thinking and analyzing...") as status:
            result = self.app.myai.ask(query, status=status, debug=args.debug, session_id=session_id, trust=args.trust, **self.ai_overrides)

        responder = result.get("responder", "engineer")
        border = "architect" if responder == "architect" else "engineer"
        title = "[architect][bold]Network Architect[/bold][/architect]" if responder == "architect" else "[engineer][bold]Network Engineer[/bold][/engineer]"

        if not result.get("streamed"):
            mdprint(Panel(Markdown(result["response"]), title=title, border_style=border, expand=False))

        if "usage" in result:
            u = result["usage"]
            console.print(f"[debug]Tokens: {u['total']} (Input: {u['input']}, Output: {u['output']})[/debug]")

    def interactive_chat(self, args, session_id):
        history = None
        if session_id:
            session_data = self.app.myai.load_session_data(session_id)
            if session_data:
                history = session_data.get("history", [])
                mdprint(Rule(title=f"[header] Resuming Session: {session_data.get('title')} [/header]", style="border"))
                if history:
                    mdprint(f"[debug]Analyzing {len(history)} previous messages...[/debug]\n")
            else:
                printer.error(f"Could not load session {session_id}. Starting clean.")

        if not history:
            mdprint(Rule(style="engineer"))
            mdprint(Markdown("**Networking Expert Agent**: Hi! I'm your assistant. I can help you diagnose issues, run commands, and manage your nodes.\nType 'exit' to quit.\n"))
            mdprint(Rule(style="engineer"))

        while True:
            try:
                user_query = Prompt.ask("[user_prompt]User[/user_prompt]")
                if not user_query.strip():
                    continue
                if user_query.lower() in ['exit', 'quit', 'bye']:
                    break

                with console.status("[ai_status]Agent is thinking...") as status:
                    result = self.app.myai.ask(user_query, chat_history=history, status=status, debug=args.debug, trust=args.trust, **self.ai_overrides)

                new_history = result.get("chat_history")
                if new_history is not None:
                    history = new_history

                responder = result.get("responder", "engineer")
                border = "architect" if responder == "architect" else "engineer"
                title = "[architect][bold]Network Architect[/bold][/architect]" if responder == "architect" else "[engineer][bold]Network Engineer[/bold][/engineer]"

                if not result.get("streamed"):
                    response_text = result.get("response", "")
                    if response_text:
                        mdprint(Panel(Markdown(response_text), title=title, border_style=border, expand=False))

                if "usage" in result:
                    u = result["usage"]
                    console.print(f"[debug]Tokens: {u['total']} (Input: {u['input']}, Output: {u['output']})[/debug]")
            except (KeyboardInterrupt, EOFError):
                console.print("\n[dim]Session closed.[/dim]")
                break</code></pre>
</dd>
</dl>
</section>
</article>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.6</a>.</p>
</footer>
</body>
</html>
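The precedence rule in `AIHandler.dispatch` — a CLI value wins over the persisted setting, and keys set in neither place are simply omitted — can be isolated into a small helper. The name `merge_overrides` and the plain-dict inputs are illustrative, not part of the codebase:

```python
def merge_overrides(cli_args, settings, keys):
    """Build kwargs where a CLI value beats the stored setting; unset keys are dropped."""
    arguments = {}
    for key in keys:
        cli_val = cli_args.get(key)
        if cli_val:
            arguments[key] = cli_val       # explicit CLI argument wins
        elif settings.get(key):
            arguments[key] = settings[key]  # fall back to local configuration
    return arguments
```

Dropping unset keys (rather than passing `None`) matters here: the merged dict is splatted as `**self.ai_overrides`, so absent keys let the AI service apply its own defaults.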
@@ -1,199 +0,0 @@
|
||||
<!doctype html>
|
||||
<html lang="en">
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
|
||||
<meta name="generator" content="pdoc3 0.11.6">
|
||||
<title>connpy.cli.api_handler API documentation</title>
|
||||
<meta name="description" content="">
|
||||
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
|
||||
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
|
||||
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
|
||||
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
|
||||
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
|
||||
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
|
||||
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
|
||||
<script>window.addEventListener('DOMContentLoaded', () => {
|
||||
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
|
||||
hljs.highlightAll();
|
||||
/* Collapse source docstrings */
|
||||
setTimeout(() => {
|
||||
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
|
||||
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
|
||||
.forEach(el => {
|
||||
let d = document.createElement('details');
|
||||
d.classList.add('hljs-string');
|
||||
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
|
||||
el.replaceWith(d);
|
||||
});
|
||||
}, 100);
|
||||
})</script>
|
||||
</head>
|
||||
<body>
|
||||
<main>
|
||||
<article id="content">
|
||||
<header>
|
||||
<h1 class="title">Module <code>connpy.cli.api_handler</code></h1>
|
||||
</header>
|
||||
<section id="section-intro">
|
||||
</section>
|
||||
<section>
|
||||
</section>
|
||||
<section>
|
||||
</section>
|
||||
<section>
|
||||
</section>
|
||||
<section>
|
||||
<h2 class="section-title" id="header-classes">Classes</h2>
|
||||
<dl>
|
||||
<dt id="connpy.cli.api_handler.APIHandler"><code class="flex name class">
|
||||
<span>class <span class="ident">APIHandler</span></span>
|
||||
<span>(</span><span>app)</span>
|
||||
</code></dt>
|
||||
<dd>
|
||||
<details class="source">
|
||||
<summary>
|
||||
<span>Expand source code</span>
|
||||
</summary>
|
||||
<pre><code class="python">class APIHandler:
|
||||
def __init__(self, app):
|
||||
self.app = app
|
||||
|
||||
def dispatch(self, args):
|
||||
try:
|
||||
status = self.app.services.system.get_api_status()
|
||||
|
||||
if args.command == "stop":
|
||||
if not status["running"]:
|
||||
printer.warning("API does not seem to be running.")
|
||||
else:
|
||||
stopped = self.app.services.system.stop_api()
|
||||
if stopped:
|
||||
printer.success("API stopped successfully.")
|
||||
|
||||
elif args.command == "restart":
|
||||
port = args.data if args.data and isinstance(args.data, int) else None
|
||||
if status["running"]:
|
||||
printer.info(f"Stopping server with process ID {status['pid']}...")
|
||||
|
||||
# Service handles port preservation if port is None
|
||||
self.app.services.system.restart_api(port=port)
|
||||
|
||||
if status["running"]:
|
||||
printer.info(f"Server with process ID {status['pid']} stopped.")
|
||||
|
||||
# Re-fetch status to show the actual port used
|
||||
new_status = self.app.services.system.get_api_status()
|
||||
printer.success(f"API restarted on port {new_status.get('port', 'unknown')}.")
|
||||
|
||||
elif args.command == "start":
|
||||
if status["running"]:
|
||||
msg = f"Connpy server is already running (PID: {status['pid']}"
|
||||
if status.get("port"):
|
||||
msg += f", Port: {status['port']}"
|
||||
msg += ")."
|
||||
printer.warning(msg)
|
||||
else:
|
||||
port = args.data if args.data and isinstance(args.data, int) else 8048
|
||||
                    self.app.services.system.start_api(port=port)
                    printer.success(f"API started on port {port}.")

            elif args.command == "debug":
                port = args.data if args.data and isinstance(args.data, int) else 8048
                self.app.services.system.debug_api(port=port)
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.api_handler.APIHandler.dispatch"><code class="name flex">
<span>def <span class="ident">dispatch</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch(self, args):
    try:
        status = self.app.services.system.get_api_status()

        if args.command == "stop":
            if not status["running"]:
                printer.warning("API does not seem to be running.")
            else:
                stopped = self.app.services.system.stop_api()
                if stopped:
                    printer.success("API stopped successfully.")

        elif args.command == "restart":
            port = args.data if args.data and isinstance(args.data, int) else None
            if status["running"]:
                printer.info(f"Stopping server with process ID {status['pid']}...")

            # Service handles port preservation if port is None
            self.app.services.system.restart_api(port=port)

            if status["running"]:
                printer.info(f"Server with process ID {status['pid']} stopped.")

            # Re-fetch status to show the actual port used
            new_status = self.app.services.system.get_api_status()
            printer.success(f"API restarted on port {new_status.get('port', 'unknown')}.")

        elif args.command == "start":
            if status["running"]:
                msg = f"Connpy server is already running (PID: {status['pid']}"
                if status.get("port"):
                    msg += f", Port: {status['port']}"
                msg += ")."
                printer.warning(msg)
            else:
                port = args.data if args.data and isinstance(args.data, int) else 8048
                self.app.services.system.start_api(port=port)
                printer.success(f"API started on port {port}.")

        elif args.command == "debug":
            port = args.data if args.data and isinstance(args.data, int) else 8048
            self.app.services.system.debug_api(port=port)
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
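<p>The port-defaulting expression repeated in <code>dispatch</code> above can be sketched in isolation. The <code>resolve_port</code> helper and the <code>SimpleNamespace</code> stand-in for the argparse namespace are illustrative, not part of connpy:</p>

```python
from types import SimpleNamespace

def resolve_port(args, default=8048):
    # Mirrors: args.data if args.data and isinstance(args.data, int) else default
    return args.data if args.data and isinstance(args.data, int) else default

print(resolve_port(SimpleNamespace(data=9000)))    # explicit integer port wins
print(resolve_port(SimpleNamespace(data=None)))    # missing value falls back to 8048
print(resolve_port(SimpleNamespace(data="9000")))  # non-integer input also falls back
```

<p>Note that a falsy value such as <code>0</code> also falls back to the default, since the expression checks truthiness before the type.</p>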
</dd>
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.api_handler.APIHandler" href="#connpy.cli.api_handler.APIHandler">APIHandler</a></code></h4>
<ul class="">
<li><code><a title="connpy.cli.api_handler.APIHandler.dispatch" href="#connpy.cli.api_handler.APIHandler.dispatch">dispatch</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.6</a>.</p>
</footer>
</body>
</html>
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.6">
<title>connpy.cli.config_handler API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.config_handler</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.config_handler.ConfigHandler"><code class="flex name class">
<span>class <span class="ident">ConfigHandler</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class ConfigHandler:
    def __init__(self, app):
        self.app = app

    def dispatch(self, args):
        actions = {
            "completion": self.show_completion,
            "fzf_wrapper": self.show_fzf_wrapper,
            "case": self.set_case,
            "fzf": self.set_fzf,
            "idletime": self.set_idletime,
            "configfolder": self.set_configfolder,
            "theme": self.set_theme,
            "engineer_model": self.set_ai_config,
            "engineer_api_key": self.set_ai_config,
            "architect_model": self.set_ai_config,
            "architect_api_key": self.set_ai_config,
            "trusted_commands": self.set_ai_config,
            "service_mode": self.set_service_mode,
            "remote_host": self.set_remote_host,
            "sync_remote": self.set_sync_remote
        }
        handler = actions.get(getattr(args, "command", None))
        if handler:
            return handler(args)

        # If no specific command was triggered, show current configuration
        return self.show_config(args)

    def show_config(self, args):
        settings = self.app.services.config_svc.get_settings()
        yaml_str = yaml.dump(settings, sort_keys=False, default_flow_style=False)
        printer.data("Current Configuration", yaml_str)

    def set_service_mode(self, args):
        new_mode = args.data[0]
        if new_mode == "remote":
            settings = self.app.services.config_svc.get_settings()
            if not settings.get("remote_host"):
                printer.error("Remote host must be configured before switching to remote mode")
                return

        self.app.services.config_svc.update_setting("service_mode", new_mode)

        # Immediate sync of fzf/text cache files for the new mode
        try:
            # 1. Clear old cache files to avoid discrepancies if fetch fails
            self.app.config._generate_nodes_cache(nodes=[], folders=[], profiles=[])

            # 2. Re-initialize services for the new mode
            from ..services.provider import ServiceProvider
            settings = self.app.services.config_svc.get_settings()
            new_services = ServiceProvider(self.app.config, mode=new_mode, remote_host=settings.get("remote_host"))

            # 3. Fetch data from new mode and generate cache
            nodes = new_services.nodes.list_nodes()
            folders = new_services.nodes.list_folders()
            profiles = new_services.profiles.list_profiles()
            new_services.nodes.generate_cache(nodes=nodes, folders=folders, profiles=profiles)

            printer.success("Config saved")
        except Exception as e:
            printer.success("Config saved")
            printer.warning(f"Note: Could not synchronize fzf cache: {e}")

    def set_remote_host(self, args):
        self.app.services.config_svc.update_setting("remote_host", args.data[0])
        printer.success("Config saved")

    def set_theme(self, args):
        try:
            valid_styles = self.app.services.config_svc.apply_theme_from_file(args.data[0])
            # Apply immediately to current session
            printer.apply_theme(valid_styles)
            printer.success(f"Theme '{args.data[0]}' applied and saved")
        except (ConnpyError, InvalidConfigurationError) as e:
            printer.error(str(e))

    def show_fzf_wrapper(self, args):
        print(get_instructions("fzf_wrapper_" + args.data[0]))

    def show_completion(self, args):
        print(get_instructions(args.data[0] + "completion"))

    def set_case(self, args):
        val = (args.data[0].lower() == "true")
        self.app.services.config_svc.update_setting("case", val)
        self.app.case = val
        printer.success("Config saved")

    def set_fzf(self, args):
        val = (args.data[0].lower() == "true")
        self.app.services.config_svc.update_setting("fzf", val)
        self.app.fzf = val
        printer.success("Config saved")

    def set_idletime(self, args):
        try:
            val = max(0, int(args.data[0]))
            self.app.services.config_svc.update_setting("idletime", val)
            printer.success("Config saved")
        except ValueError:
            printer.error("Keepalive must be an integer.")

    def set_configfolder(self, args):
        try:
            self.app.services.config_svc.set_config_folder(args.data[0])
            printer.success("Config saved")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def set_sync_remote(self, args):
        val = (args.data[0].lower() == "true")
        self.app.services.config_svc.update_setting("sync_remote", val)
        self.app.services.sync.sync_remote = val
        printer.success("Config saved")

    def set_ai_config(self, args):
        try:
            settings = self.app.services.config_svc.get_settings()
            aiconfig = settings.get("ai", {})
            aiconfig[args.command] = args.data[0]
            self.app.services.config_svc.update_setting("ai", aiconfig)
            printer.success("Config saved")
        except ConnpyError as e:
            printer.error(str(e))</code></pre>
</details>
<div class="desc"></div>
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.config_handler.ConfigHandler.dispatch"><code class="name flex">
<span>def <span class="ident">dispatch</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch(self, args):
    actions = {
        "completion": self.show_completion,
        "fzf_wrapper": self.show_fzf_wrapper,
        "case": self.set_case,
        "fzf": self.set_fzf,
        "idletime": self.set_idletime,
        "configfolder": self.set_configfolder,
        "theme": self.set_theme,
        "engineer_model": self.set_ai_config,
        "engineer_api_key": self.set_ai_config,
        "architect_model": self.set_ai_config,
        "architect_api_key": self.set_ai_config,
        "trusted_commands": self.set_ai_config,
        "service_mode": self.set_service_mode,
        "remote_host": self.set_remote_host,
        "sync_remote": self.set_sync_remote
    }
    handler = actions.get(getattr(args, "command", None))
    if handler:
        return handler(args)

    # If no specific command was triggered, show current configuration
    return self.show_config(args)</code></pre>
</details>
<div class="desc"></div>
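<p>The dictionary-dispatch pattern used here can be sketched generically. The <code>make_dispatcher</code> factory and the placeholder handlers below are illustrative, not connpy's real service calls:</p>

```python
from types import SimpleNamespace

def make_dispatcher(actions, fallback):
    def dispatch(args):
        # getattr with a default tolerates namespaces lacking .command
        handler = actions.get(getattr(args, "command", None))
        if handler:
            return handler(args)
        # No mapped command: fall back, as dispatch() falls back to show_config
        return fallback(args)
    return dispatch

dispatch = make_dispatcher({"case": lambda a: "set_case"}, lambda a: "show_config")
print(dispatch(SimpleNamespace(command="case")))     # routed to the "case" handler
print(dispatch(SimpleNamespace(command="unknown")))  # unmapped command falls back
```

<p>Mapping several command strings to the same handler (as the five AI keys map to <code>set_ai_config</code>) works unchanged, since the handler can read <code>args.command</code> to tell them apart.</p>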
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.set_ai_config"><code class="name flex">
<span>def <span class="ident">set_ai_config</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def set_ai_config(self, args):
    try:
        settings = self.app.services.config_svc.get_settings()
        aiconfig = settings.get("ai", {})
        aiconfig[args.command] = args.data[0]
        self.app.services.config_svc.update_setting("ai", aiconfig)
        printer.success("Config saved")
    except ConnpyError as e:
        printer.error(str(e))</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.set_case"><code class="name flex">
<span>def <span class="ident">set_case</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def set_case(self, args):
    val = (args.data[0].lower() == "true")
    self.app.services.config_svc.update_setting("case", val)
    self.app.case = val
    printer.success("Config saved")</code></pre>
</details>
<div class="desc"></div>
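<p>The boolean parsing shared by <code>set_case</code>, <code>set_fzf</code>, and <code>set_sync_remote</code> is strict; only the word <code>true</code> (any case) enables the flag. A standalone sketch, with the <code>parse_bool_flag</code> helper name being illustrative:</p>

```python
def parse_bool_flag(raw):
    # Mirrors: val = (args.data[0].lower() == "true")
    # Anything other than the exact word "true" disables the flag.
    return raw.lower() == "true"

print(parse_bool_flag("True"))   # case-insensitive match
print(parse_bool_flag("false"))  # explicit disable
print(parse_bool_flag("1"))      # "1"/"yes" are NOT accepted as truthy
```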
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.set_configfolder"><code class="name flex">
<span>def <span class="ident">set_configfolder</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def set_configfolder(self, args):
    try:
        self.app.services.config_svc.set_config_folder(args.data[0])
        printer.success("Config saved")
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.set_fzf"><code class="name flex">
<span>def <span class="ident">set_fzf</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def set_fzf(self, args):
    val = (args.data[0].lower() == "true")
    self.app.services.config_svc.update_setting("fzf", val)
    self.app.fzf = val
    printer.success("Config saved")</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.set_idletime"><code class="name flex">
<span>def <span class="ident">set_idletime</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def set_idletime(self, args):
    try:
        val = max(0, int(args.data[0]))
        self.app.services.config_svc.update_setting("idletime", val)
        printer.success("Config saved")
    except ValueError:
        printer.error("Keepalive must be an integer.")</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.set_remote_host"><code class="name flex">
<span>def <span class="ident">set_remote_host</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def set_remote_host(self, args):
    self.app.services.config_svc.update_setting("remote_host", args.data[0])
    printer.success("Config saved")</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.set_service_mode"><code class="name flex">
<span>def <span class="ident">set_service_mode</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def set_service_mode(self, args):
    new_mode = args.data[0]
    if new_mode == "remote":
        settings = self.app.services.config_svc.get_settings()
        if not settings.get("remote_host"):
            printer.error("Remote host must be configured before switching to remote mode")
            return

    self.app.services.config_svc.update_setting("service_mode", new_mode)

    # Immediate sync of fzf/text cache files for the new mode
    try:
        # 1. Clear old cache files to avoid discrepancies if fetch fails
        self.app.config._generate_nodes_cache(nodes=[], folders=[], profiles=[])

        # 2. Re-initialize services for the new mode
        from ..services.provider import ServiceProvider
        settings = self.app.services.config_svc.get_settings()
        new_services = ServiceProvider(self.app.config, mode=new_mode, remote_host=settings.get("remote_host"))

        # 3. Fetch data from new mode and generate cache
        nodes = new_services.nodes.list_nodes()
        folders = new_services.nodes.list_folders()
        profiles = new_services.profiles.list_profiles()
        new_services.nodes.generate_cache(nodes=nodes, folders=folders, profiles=profiles)

        printer.success("Config saved")
    except Exception as e:
        printer.success("Config saved")
        printer.warning(f"Note: Could not synchronize fzf cache: {e}")</code></pre>
</details>
<div class="desc"></div>
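<p>The clear-then-rebuild sequence in <code>set_service_mode</code> guarantees that a failed fetch leaves an empty cache rather than a stale one, while the mode setting itself stays saved. A minimal sketch of that pattern, using hypothetical in-memory stand-ins for the cache and fetch call (connpy's real calls go through <code>ServiceProvider</code>):</p>

```python
# Illustrative in-memory cache; the real code writes fzf/text cache files.
cache = {"nodes": ["stale"], "folders": ["stale"], "profiles": ["stale"]}

def generate_cache(nodes, folders, profiles):
    cache.update(nodes=nodes, folders=folders, profiles=profiles)

def switch_mode(fetch):
    # 1. Clear first, so a failed fetch leaves an empty (not stale) cache
    generate_cache(nodes=[], folders=[], profiles=[])
    try:
        # 2./3. Fetch from the new backend and rebuild the cache
        nodes, folders, profiles = fetch()
        generate_cache(nodes=nodes, folders=folders, profiles=profiles)
        return "Config saved"
    except Exception as e:
        # The mode change is already persisted; only the cache sync failed
        return f"Config saved (cache sync failed: {e})"

print(switch_mode(lambda: (["n1"], ["f1"], ["p1"])))
```

<p>This explains why the method reports "Config saved" even in the <code>except</code> branch: the setting update happened before the sync attempt.</p>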
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.set_sync_remote"><code class="name flex">
<span>def <span class="ident">set_sync_remote</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def set_sync_remote(self, args):
    val = (args.data[0].lower() == "true")
    self.app.services.config_svc.update_setting("sync_remote", val)
    self.app.services.sync.sync_remote = val
    printer.success("Config saved")</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.set_theme"><code class="name flex">
<span>def <span class="ident">set_theme</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def set_theme(self, args):
    try:
        valid_styles = self.app.services.config_svc.apply_theme_from_file(args.data[0])
        # Apply immediately to current session
        printer.apply_theme(valid_styles)
        printer.success(f"Theme '{args.data[0]}' applied and saved")
    except (ConnpyError, InvalidConfigurationError) as e:
        printer.error(str(e))</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.show_completion"><code class="name flex">
<span>def <span class="ident">show_completion</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def show_completion(self, args):
    print(get_instructions(args.data[0] + "completion"))</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.show_config"><code class="name flex">
<span>def <span class="ident">show_config</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def show_config(self, args):
    settings = self.app.services.config_svc.get_settings()
    yaml_str = yaml.dump(settings, sort_keys=False, default_flow_style=False)
    printer.data("Current Configuration", yaml_str)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.show_fzf_wrapper"><code class="name flex">
<span>def <span class="ident">show_fzf_wrapper</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def show_fzf_wrapper(self, args):
    print(get_instructions("fzf_wrapper_" + args.data[0]))</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.config_handler.ConfigHandler" href="#connpy.cli.config_handler.ConfigHandler">ConfigHandler</a></code></h4>
<ul class="two-column">
<li><code><a title="connpy.cli.config_handler.ConfigHandler.dispatch" href="#connpy.cli.config_handler.ConfigHandler.dispatch">dispatch</a></code></li>
<li><code><a title="connpy.cli.config_handler.ConfigHandler.set_ai_config" href="#connpy.cli.config_handler.ConfigHandler.set_ai_config">set_ai_config</a></code></li>
<li><code><a title="connpy.cli.config_handler.ConfigHandler.set_case" href="#connpy.cli.config_handler.ConfigHandler.set_case">set_case</a></code></li>
<li><code><a title="connpy.cli.config_handler.ConfigHandler.set_configfolder" href="#connpy.cli.config_handler.ConfigHandler.set_configfolder">set_configfolder</a></code></li>
<li><code><a title="connpy.cli.config_handler.ConfigHandler.set_fzf" href="#connpy.cli.config_handler.ConfigHandler.set_fzf">set_fzf</a></code></li>
<li><code><a title="connpy.cli.config_handler.ConfigHandler.set_idletime" href="#connpy.cli.config_handler.ConfigHandler.set_idletime">set_idletime</a></code></li>
<li><code><a title="connpy.cli.config_handler.ConfigHandler.set_remote_host" href="#connpy.cli.config_handler.ConfigHandler.set_remote_host">set_remote_host</a></code></li>
<li><code><a title="connpy.cli.config_handler.ConfigHandler.set_service_mode" href="#connpy.cli.config_handler.ConfigHandler.set_service_mode">set_service_mode</a></code></li>
<li><code><a title="connpy.cli.config_handler.ConfigHandler.set_sync_remote" href="#connpy.cli.config_handler.ConfigHandler.set_sync_remote">set_sync_remote</a></code></li>
<li><code><a title="connpy.cli.config_handler.ConfigHandler.set_theme" href="#connpy.cli.config_handler.ConfigHandler.set_theme">set_theme</a></code></li>
<li><code><a title="connpy.cli.config_handler.ConfigHandler.show_completion" href="#connpy.cli.config_handler.ConfigHandler.show_completion">show_completion</a></code></li>
<li><code><a title="connpy.cli.config_handler.ConfigHandler.show_config" href="#connpy.cli.config_handler.ConfigHandler.show_config">show_config</a></code></li>
<li><code><a title="connpy.cli.config_handler.ConfigHandler.show_fzf_wrapper" href="#connpy.cli.config_handler.ConfigHandler.show_fzf_wrapper">show_fzf_wrapper</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.6</a>.</p>
</footer>
</body>
</html>
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.6">
<title>connpy.cli.context_handler API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
|
||||
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
|
||||
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
|
||||
<script>window.addEventListener('DOMContentLoaded', () => {
|
||||
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
|
||||
hljs.highlightAll();
|
||||
/* Collapse source docstrings */
|
||||
setTimeout(() => {
|
||||
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
|
||||
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
|
||||
.forEach(el => {
|
||||
let d = document.createElement('details');
|
||||
d.classList.add('hljs-string');
|
||||
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
|
||||
el.replaceWith(d);
|
||||
});
|
||||
}, 100);
|
||||
})</script>
|
||||
</head>
|
||||
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.context_handler</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.context_handler.ContextHandler"><code class="flex name class">
<span>class <span class="ident">ContextHandler</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class ContextHandler:
    def __init__(self, app):
        self.app = app
        self.service = self.app.services.context

    def dispatch(self, args):
        try:
            if args.add:
                if len(args.add) < 2:
                    printer.error("--add requires name and at least one regex")
                    return
                self.service.add_context(args.add[0], args.add[1:])
                printer.success(f"Context '{args.add[0]}' added successfully.")

            elif args.rm:
                if not args.context_name:
                    printer.error("--rm requires a context name")
                    return
                self.service.delete_context(args.context_name)
                printer.success(f"Context '{args.context_name}' deleted successfully.")

            elif args.ls:
                contexts = self.service.list_contexts()
                for ctx in contexts:
                    if ctx["active"]:
                        printer.success(f"{ctx['name']} (active)")
                    else:
                        printer.custom(" ", ctx["name"])

            elif args.set:
                if not args.context_name:
                    printer.error("--set requires a context name")
                    return
                self.service.set_active_context(args.context_name)
                printer.success(f"Context set to: {args.context_name}")

            elif args.show:
                if not args.context_name:
                    printer.error("--show requires a context name")
                    return
                contexts = self.service.contexts
                if args.context_name not in contexts:
                    printer.error(f"Context '{args.context_name}' does not exist")
                    return
                yaml_output = yaml.dump(contexts[args.context_name], sort_keys=False, default_flow_style=False)
                printer.custom(args.context_name, "")
                print(yaml_output)

            elif args.edit:
                if len(args.edit) < 2:
                    printer.error("--edit requires name and at least one regex")
                    return
                self.service.update_context(args.edit[0], args.edit[1:])
                printer.success(f"Context '{args.edit[0]}' modified successfully.")

            else:
                # Default behavior if no flags: show list
                self.dispatch_ls(args)

        except ValueError as e:
            printer.error(str(e))
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def dispatch_ls(self, args):
        contexts = self.service.list_contexts()
        for ctx in contexts:
            if ctx["active"]:
                printer.success(f"{ctx['name']} (active)")
            else:
                printer.custom(" ", ctx["name"])</code></pre>
</details>
<div class="desc"></div>
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.context_handler.ContextHandler.dispatch"><code class="name flex">
<span>def <span class="ident">dispatch</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch(self, args):
    try:
        if args.add:
            if len(args.add) < 2:
                printer.error("--add requires name and at least one regex")
                return
            self.service.add_context(args.add[0], args.add[1:])
            printer.success(f"Context '{args.add[0]}' added successfully.")

        elif args.rm:
            if not args.context_name:
                printer.error("--rm requires a context name")
                return
            self.service.delete_context(args.context_name)
            printer.success(f"Context '{args.context_name}' deleted successfully.")

        elif args.ls:
            contexts = self.service.list_contexts()
            for ctx in contexts:
                if ctx["active"]:
                    printer.success(f"{ctx['name']} (active)")
                else:
                    printer.custom(" ", ctx["name"])

        elif args.set:
            if not args.context_name:
                printer.error("--set requires a context name")
                return
            self.service.set_active_context(args.context_name)
            printer.success(f"Context set to: {args.context_name}")

        elif args.show:
            if not args.context_name:
                printer.error("--show requires a context name")
                return
            contexts = self.service.contexts
            if args.context_name not in contexts:
                printer.error(f"Context '{args.context_name}' does not exist")
                return
            yaml_output = yaml.dump(contexts[args.context_name], sort_keys=False, default_flow_style=False)
            printer.custom(args.context_name, "")
            print(yaml_output)

        elif args.edit:
            if len(args.edit) < 2:
                printer.error("--edit requires name and at least one regex")
                return
            self.service.update_context(args.edit[0], args.edit[1:])
            printer.success(f"Context '{args.edit[0]}' modified successfully.")

        else:
            # Default behavior if no flags: show list
            self.dispatch_ls(args)

    except ValueError as e:
        printer.error(str(e))
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.context_handler.ContextHandler.dispatch_ls"><code class="name flex">
<span>def <span class="ident">dispatch_ls</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch_ls(self, args):
    contexts = self.service.list_contexts()
    for ctx in contexts:
        if ctx["active"]:
            printer.success(f"{ctx['name']} (active)")
        else:
            printer.custom(" ", ctx["name"])</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.context_handler.ContextHandler" href="#connpy.cli.context_handler.ContextHandler">ContextHandler</a></code></h4>
<ul class="">
<li><code><a title="connpy.cli.context_handler.ContextHandler.dispatch" href="#connpy.cli.context_handler.ContextHandler.dispatch">dispatch</a></code></li>
<li><code><a title="connpy.cli.context_handler.ContextHandler.dispatch_ls" href="#connpy.cli.context_handler.ContextHandler.dispatch_ls">dispatch_ls</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.6</a>.</p>
</footer>
</body>
</html>
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.6">
<title>connpy.cli.forms API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.forms</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.forms.Forms"><code class="flex name class">
<span>class <span class="ident">Forms</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class Forms:
    def __init__(self, app):
        self.app = app
        self.validators = Validators(app)

    def questions_edit(self):
        questions = []
        questions.append(inquirer.Confirm("host", message="Edit Hostname/IP?"))
        questions.append(inquirer.Confirm("protocol", message="Edit Protocol/app?"))
        questions.append(inquirer.Confirm("port", message="Edit Port?"))
        questions.append(inquirer.Confirm("options", message="Edit Options?"))
        questions.append(inquirer.Confirm("logs", message="Edit logging path/file?"))
        questions.append(inquirer.Confirm("tags", message="Edit tags?"))
        questions.append(inquirer.Confirm("jumphost", message="Edit jumphost?"))
        questions.append(inquirer.Confirm("user", message="Edit User?"))
        questions.append(inquirer.Confirm("password", message="Edit password?"))
        return inquirer.prompt(questions)

    def questions_nodes(self, unique, uniques=None, edit=None):
        try:
            defaults = self.app.services.nodes.get_node_details(unique)
            if "tags" not in defaults:
                defaults["tags"] = ""
            if "jumphost" not in defaults:
                defaults["jumphost"] = ""
        except Exception:
            defaults = {"host": "", "protocol": "", "port": "", "user": "", "options": "", "logs": "", "tags": "", "password": "", "jumphost": ""}
        node = {}
        if edit is None:
            edit = {"host": True, "protocol": True, "port": True, "user": True, "password": True, "options": True, "logs": True, "tags": True, "jumphost": True}
        questions = []
        if edit["host"]:
            questions.append(inquirer.Text("host", message="Add Hostname or IP", validate=self.validators.host_validation, default=defaults["host"]))
        else:
            node["host"] = defaults["host"]
        if edit["protocol"]:
            questions.append(inquirer.Text("protocol", message="Select Protocol/app", validate=self.validators.protocol_validation, default=defaults["protocol"]))
        else:
            node["protocol"] = defaults["protocol"]
        if edit["port"]:
            questions.append(inquirer.Text("port", message="Select Port Number", validate=self.validators.port_validation, default=defaults["port"]))
        else:
            node["port"] = defaults["port"]
        if edit["options"]:
            questions.append(inquirer.Text("options", message="Pass extra options to protocol/app", validate=self.validators.default_validation, default=defaults["options"]))
        else:
            node["options"] = defaults["options"]
        if edit["logs"]:
            questions.append(inquirer.Text("logs", message="Pick logging path/file ", validate=self.validators.default_validation, default=defaults["logs"].replace("{", "{{").replace("}", "}}")))
        else:
            node["logs"] = defaults["logs"]
        if edit["tags"]:
            questions.append(inquirer.Text("tags", message="Add tags dictionary", validate=self.validators.tags_validation, default=str(defaults["tags"]).replace("{", "{{").replace("}", "}}")))
        else:
            node["tags"] = defaults["tags"]
        if edit["jumphost"]:
            questions.append(inquirer.Text("jumphost", message="Add Jumphost node", validate=self.validators.jumphost_validation, default=str(defaults["jumphost"]).replace("{", "{{").replace("}", "}}")))
        else:
            node["jumphost"] = defaults["jumphost"]
        if edit["user"]:
            questions.append(inquirer.Text("user", message="Pick username", validate=self.validators.default_validation, default=defaults["user"]))
        else:
            node["user"] = defaults["user"]
        if edit["password"]:
            questions.append(inquirer.List("password", message="Password: Use a local password, no password or a list of profiles to reference?", choices=["Local Password", "Profiles", "No Password"]))
        else:
            node["password"] = defaults["password"]

        answer = inquirer.prompt(questions)
        if answer is None:
            return False

        if "password" in answer:
            if answer["password"] == "Local Password":
                passq = [inquirer.Password("password", message="Set Password")]
                passa = inquirer.prompt(passq)
                if passa is None:
                    return False
                answer["password"] = self.app.services.config_svc.encrypt_password(passa["password"])
            elif answer["password"] == "Profiles":
                passq = [(inquirer.Text("password", message="Set a @profile or a comma separated list of @profiles", validate=self.validators.pass_validation))]
                passa = inquirer.prompt(passq)
                if passa is None:
                    return False
                answer["password"] = passa["password"].split(",")
            elif answer["password"] == "No Password":
                answer["password"] = ""

        if "tags" in answer and not answer["tags"].startswith("@") and answer["tags"]:
            answer["tags"] = ast.literal_eval(answer["tags"])

        result = {**uniques, **answer, **node}
        result["type"] = "connection"
        return result

    def questions_profiles(self, unique, edit=None):
        try:
            defaults = self.app.services.profiles.get_profile(unique, resolve=False)
            if "tags" not in defaults:
                defaults["tags"] = ""
            if "jumphost" not in defaults:
                defaults["jumphost"] = ""
        except Exception:
            defaults = {"host": "", "protocol": "", "port": "", "user": "", "options": "", "logs": "", "tags": "", "jumphost": ""}
        profile = {}
        if edit is None:
            edit = {"host": True, "protocol": True, "port": True, "user": True, "password": True, "options": True, "logs": True, "tags": True, "jumphost": True}
        questions = []
        if edit["host"]:
            questions.append(inquirer.Text("host", message="Add Hostname or IP", default=defaults["host"]))
        else:
            profile["host"] = defaults["host"]
        if edit["protocol"]:
            questions.append(inquirer.Text("protocol", message="Select Protocol/app", validate=self.validators.profile_protocol_validation, default=defaults["protocol"]))
        else:
            profile["protocol"] = defaults["protocol"]
        if edit["port"]:
            questions.append(inquirer.Text("port", message="Select Port Number", validate=self.validators.profile_port_validation, default=defaults["port"]))
        else:
            profile["port"] = defaults["port"]
        if edit["options"]:
            questions.append(inquirer.Text("options", message="Pass extra options to protocol/app", default=defaults["options"]))
        else:
            profile["options"] = defaults["options"]
        if edit["logs"]:
            questions.append(inquirer.Text("logs", message="Pick logging path/file ", default=defaults["logs"].replace("{", "{{").replace("}", "}}")))
        else:
            profile["logs"] = defaults["logs"]
        if edit["tags"]:
            questions.append(inquirer.Text("tags", message="Add tags dictionary", validate=self.validators.profile_tags_validation, default=str(defaults["tags"]).replace("{", "{{").replace("}", "}}")))
        else:
            profile["tags"] = defaults["tags"]
        if edit["jumphost"]:
            questions.append(inquirer.Text("jumphost", message="Add Jumphost node", validate=self.validators.profile_jumphost_validation, default=str(defaults["jumphost"]).replace("{", "{{").replace("}", "}}")))
        else:
            profile["jumphost"] = defaults["jumphost"]
        if edit["user"]:
            questions.append(inquirer.Text("user", message="Pick username", default=defaults["user"]))
        else:
            profile["user"] = defaults["user"]
        if edit["password"]:
            questions.append(inquirer.Password("password", message="Set Password"))
        else:
            profile["password"] = defaults["password"]

        answer = inquirer.prompt(questions)
        if answer is None:
            return False

        if "password" in answer:
            if answer["password"] != "":
                answer["password"] = self.app.services.config_svc.encrypt_password(answer["password"])

        if "tags" in answer and answer["tags"]:
            answer["tags"] = ast.literal_eval(answer["tags"])

        result = {**answer, **profile}
        result["id"] = unique
        return result

    def questions_bulk(self, nodes="", hosts=""):
        questions = []
        questions.append(inquirer.Text("ids", message="add a comma separated list of nodes to add", default=nodes, validate=self.validators.bulk_node_validation))
        questions.append(inquirer.Text("location", message="Add a @folder, @subfolder@folder or leave empty", validate=self.validators.bulk_folder_validation))
        questions.append(inquirer.Text("host", message="Add comma separated list of Hostnames or IPs", default=hosts, validate=self.validators.bulk_host_validation))
        questions.append(inquirer.Text("protocol", message="Select Protocol/app", validate=self.validators.protocol_validation))
        questions.append(inquirer.Text("port", message="Select Port Number", validate=self.validators.port_validation))
        questions.append(inquirer.Text("options", message="Pass extra options to protocol/app", validate=self.validators.default_validation))
        questions.append(inquirer.Text("logs", message="Pick logging path/file ", validate=self.validators.default_validation))
        questions.append(inquirer.Text("tags", message="Add tags dictionary", validate=self.validators.tags_validation))
        questions.append(inquirer.Text("jumphost", message="Add Jumphost node", validate=self.validators.jumphost_validation))
        questions.append(inquirer.Text("user", message="Pick username", validate=self.validators.default_validation))
        questions.append(inquirer.List("password", message="Password: Use a local password, no password or a list of profiles to reference?", choices=["Local Password", "Profiles", "No Password"]))

        answer = inquirer.prompt(questions)
        if answer is None:
            return False

        if "password" in answer:
            if answer["password"] == "Local Password":
                passq = [inquirer.Password("password", message="Set Password")]
                passa = inquirer.prompt(passq)
                answer["password"] = self.app.services.config_svc.encrypt_password(passa["password"])
            elif answer["password"] == "Profiles":
                passq = [(inquirer.Text("password", message="Set a @profile or a comma separated list of @profiles", validate=self.validators.pass_validation))]
                passa = inquirer.prompt(passq)
                answer["password"] = passa["password"].split(",")
            elif answer["password"] == "No Password":
                answer["password"] = ""

        answer["type"] = "connection"
        if "tags" in answer and not answer["tags"].startswith("@") and answer["tags"]:
            answer["tags"] = ast.literal_eval(answer["tags"])

        return answer</code></pre>
</details>
<div class="desc"></div>
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.forms.Forms.questions_bulk"><code class="name flex">
<span>def <span class="ident">questions_bulk</span></span>(<span>self, nodes='', hosts='')</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def questions_bulk(self, nodes="", hosts=""):
    questions = []
    questions.append(inquirer.Text("ids", message="add a comma separated list of nodes to add", default=nodes, validate=self.validators.bulk_node_validation))
    questions.append(inquirer.Text("location", message="Add a @folder, @subfolder@folder or leave empty", validate=self.validators.bulk_folder_validation))
    questions.append(inquirer.Text("host", message="Add comma separated list of Hostnames or IPs", default=hosts, validate=self.validators.bulk_host_validation))
    questions.append(inquirer.Text("protocol", message="Select Protocol/app", validate=self.validators.protocol_validation))
    questions.append(inquirer.Text("port", message="Select Port Number", validate=self.validators.port_validation))
    questions.append(inquirer.Text("options", message="Pass extra options to protocol/app", validate=self.validators.default_validation))
    questions.append(inquirer.Text("logs", message="Pick logging path/file ", validate=self.validators.default_validation))
    questions.append(inquirer.Text("tags", message="Add tags dictionary", validate=self.validators.tags_validation))
    questions.append(inquirer.Text("jumphost", message="Add Jumphost node", validate=self.validators.jumphost_validation))
    questions.append(inquirer.Text("user", message="Pick username", validate=self.validators.default_validation))
    questions.append(inquirer.List("password", message="Password: Use a local password, no password or a list of profiles to reference?", choices=["Local Password", "Profiles", "No Password"]))

    answer = inquirer.prompt(questions)
    if answer is None:
        return False

    if "password" in answer:
        if answer["password"] == "Local Password":
            passq = [inquirer.Password("password", message="Set Password")]
            passa = inquirer.prompt(passq)
            answer["password"] = self.app.services.config_svc.encrypt_password(passa["password"])
        elif answer["password"] == "Profiles":
            passq = [(inquirer.Text("password", message="Set a @profile or a comma separated list of @profiles", validate=self.validators.pass_validation))]
            passa = inquirer.prompt(passq)
            answer["password"] = passa["password"].split(",")
        elif answer["password"] == "No Password":
            answer["password"] = ""

    answer["type"] = "connection"
    if "tags" in answer and not answer["tags"].startswith("@") and answer["tags"]:
        answer["tags"] = ast.literal_eval(answer["tags"])

    return answer</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.forms.Forms.questions_edit"><code class="name flex">
<span>def <span class="ident">questions_edit</span></span>(<span>self)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def questions_edit(self):
    questions = []
    questions.append(inquirer.Confirm("host", message="Edit Hostname/IP?"))
    questions.append(inquirer.Confirm("protocol", message="Edit Protocol/app?"))
    questions.append(inquirer.Confirm("port", message="Edit Port?"))
    questions.append(inquirer.Confirm("options", message="Edit Options?"))
    questions.append(inquirer.Confirm("logs", message="Edit logging path/file?"))
    questions.append(inquirer.Confirm("tags", message="Edit tags?"))
    questions.append(inquirer.Confirm("jumphost", message="Edit jumphost?"))
    questions.append(inquirer.Confirm("user", message="Edit User?"))
    questions.append(inquirer.Confirm("password", message="Edit password?"))
    return inquirer.prompt(questions)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.forms.Forms.questions_nodes"><code class="name flex">
<span>def <span class="ident">questions_nodes</span></span>(<span>self, unique, uniques=None, edit=None)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def questions_nodes(self, unique, uniques=None, edit=None):
|
||||
try:
|
||||
defaults = self.app.services.nodes.get_node_details(unique)
|
||||
if "tags" not in defaults:
|
||||
defaults["tags"] = ""
|
||||
if "jumphost" not in defaults:
|
||||
defaults["jumphost"] = ""
|
||||
except Exception:
|
||||
defaults = {"host": "", "protocol": "", "port": "", "user": "", "options": "", "logs": "", "tags": "", "password": "", "jumphost": ""}
|
||||
node = {}
|
||||
if edit is None:
|
||||
edit = {"host": True, "protocol": True, "port": True, "user": True, "password": True, "options": True, "logs": True, "tags": True, "jumphost": True}
|
||||
questions = []
|
||||
if edit["host"]:
|
||||
questions.append(inquirer.Text("host", message="Add Hostname or IP", validate=self.validators.host_validation, default=defaults["host"]))
|
||||
else:
|
||||
node["host"] = defaults["host"]
|
||||
if edit["protocol"]:
|
||||
questions.append(inquirer.Text("protocol", message="Select Protocol/app", validate=self.validators.protocol_validation, default=defaults["protocol"]))
|
||||
else:
|
||||
node["protocol"] = defaults["protocol"]
|
||||
if edit["port"]:
|
||||
questions.append(inquirer.Text("port", message="Select Port Number", validate=self.validators.port_validation, default=defaults["port"]))
|
||||
else:
|
||||
node["port"] = defaults["port"]
|
||||
if edit["options"]:
|
||||
questions.append(inquirer.Text("options", message="Pass extra options to protocol/app", validate=self.validators.default_validation, default=defaults["options"]))
|
||||
else:
|
||||
node["options"] = defaults["options"]
|
||||
if edit["logs"]:
|
||||
questions.append(inquirer.Text("logs", message="Pick logging path/file ", validate=self.validators.default_validation, default=defaults["logs"].replace("{", "{{").replace("}", "}}")))
|
||||
else:
|
||||
node["logs"] = defaults["logs"]
|
||||
if edit["tags"]:
|
||||
questions.append(inquirer.Text("tags", message="Add tags dictionary", validate=self.validators.tags_validation, default=str(defaults["tags"]).replace("{", "{{").replace("}", "}}")))
|
||||
else:
|
||||
node["tags"] = defaults["tags"]
|
||||
if edit["jumphost"]:
|
||||
questions.append(inquirer.Text("jumphost", message="Add Jumphost node", validate=self.validators.jumphost_validation, default=str(defaults["jumphost"]).replace("{", "{{").replace("}", "}}")))
|
||||
else:
|
||||
node["jumphost"] = defaults["jumphost"]
|
||||
if edit["user"]:
|
||||
questions.append(inquirer.Text("user", message="Pick username", validate=self.validators.default_validation, default=defaults["user"]))
|
||||
else:
|
||||
node["user"] = defaults["user"]
|
||||
if edit["password"]:
|
||||
questions.append(inquirer.List("password", message="Password: Use a local password, no password or a list of profiles to reference?", choices=["Local Password", "Profiles", "No Password"]))
|
||||
else:
|
||||
node["password"] = defaults["password"]
|
||||
|
||||
answer = inquirer.prompt(questions)
|
||||
if answer is None:
|
||||
return False
|
||||
|
||||
if "password" in answer:
|
||||
if answer["password"] == "Local Password":
|
||||
passq = [inquirer.Password("password", message="Set Password")]
|
||||
passa = inquirer.prompt(passq)
|
||||
if passa is None:
|
||||
return False
|
||||
answer["password"] = self.app.services.config_svc.encrypt_password(passa["password"])
|
||||
elif answer["password"] == "Profiles":
|
||||
passq = [(inquirer.Text("password", message="Set a @profile or a comma separated list of @profiles", validate=self.validators.pass_validation))]
|
||||
passa = inquirer.prompt(passq)
|
||||
if passa is None:
|
||||
return False
|
||||
answer["password"] = passa["password"].split(",")
|
||||
elif answer["password"] == "No Password":
|
||||
answer["password"] = ""
|
||||
|
||||
if "tags" in answer and not answer["tags"].startswith("@") and answer["tags"]:
|
||||
answer["tags"] = ast.literal_eval(answer["tags"])
|
||||
|
||||
result = {**uniques, **answer, **node}
|
||||
result["type"] = "connection"
|
||||
return result</code></pre>
|
||||
</details>
|
||||
<div class="desc"></div>
|
||||
</dd>
|
||||
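The dictionary returned by `questions_nodes` is assembled by merging three sources with right-most precedence: `{**uniques, **answer, **node}`. A self-contained sketch of that merge order (all field values below are hypothetical, and no prompting is involved):

```python
# Hypothetical values illustrating the merge order in questions_nodes.
# Fields the user chose not to edit are collected in `node` and override
# prompt answers, which in turn override the `uniques` identifiers.
uniques = {"id": "router1@office"}
answer = {"host": "10.0.0.1", "port": "22"}  # values typed at the prompts
node = {"port": "2222"}                      # non-edited field kept from stored defaults
result = {**uniques, **answer, **node}
result["type"] = "connection"
print(result["port"])  # -> 2222 (the stored default wins over the prompt answer)
```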
<dt id="connpy.cli.forms.Forms.questions_profiles"><code class="name flex">
<span>def <span class="ident">questions_profiles</span></span>(<span>self, unique, edit=None)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def questions_profiles(self, unique, edit=None):
    try:
        defaults = self.app.services.profiles.get_profile(unique, resolve=False)
        if "tags" not in defaults:
            defaults["tags"] = ""
        if "jumphost" not in defaults:
            defaults["jumphost"] = ""
    except Exception:
        defaults = {"host": "", "protocol": "", "port": "", "user": "", "options": "", "logs": "", "tags": "", "jumphost": ""}
    profile = {}
    if edit is None:
        edit = {"host": True, "protocol": True, "port": True, "user": True, "password": True, "options": True, "logs": True, "tags": True, "jumphost": True}
    questions = []
    if edit["host"]:
        questions.append(inquirer.Text("host", message="Add Hostname or IP", default=defaults["host"]))
    else:
        profile["host"] = defaults["host"]
    if edit["protocol"]:
        questions.append(inquirer.Text("protocol", message="Select Protocol/app", validate=self.validators.profile_protocol_validation, default=defaults["protocol"]))
    else:
        profile["protocol"] = defaults["protocol"]
    if edit["port"]:
        questions.append(inquirer.Text("port", message="Select Port Number", validate=self.validators.profile_port_validation, default=defaults["port"]))
    else:
        profile["port"] = defaults["port"]
    if edit["options"]:
        questions.append(inquirer.Text("options", message="Pass extra options to protocol/app", default=defaults["options"]))
    else:
        profile["options"] = defaults["options"]
    if edit["logs"]:
        questions.append(inquirer.Text("logs", message="Pick logging path/file ", default=defaults["logs"].replace("{", "{{").replace("}", "}}")))
    else:
        profile["logs"] = defaults["logs"]
    if edit["tags"]:
        questions.append(inquirer.Text("tags", message="Add tags dictionary", validate=self.validators.profile_tags_validation, default=str(defaults["tags"]).replace("{", "{{").replace("}", "}}")))
    else:
        profile["tags"] = defaults["tags"]
    if edit["jumphost"]:
        questions.append(inquirer.Text("jumphost", message="Add Jumphost node", validate=self.validators.profile_jumphost_validation, default=str(defaults["jumphost"]).replace("{", "{{").replace("}", "}}")))
    else:
        profile["jumphost"] = defaults["jumphost"]
    if edit["user"]:
        questions.append(inquirer.Text("user", message="Pick username", default=defaults["user"]))
    else:
        profile["user"] = defaults["user"]
    if edit["password"]:
        questions.append(inquirer.Password("password", message="Set Password"))
    else:
        profile["password"] = defaults["password"]

    answer = inquirer.prompt(questions)
    if answer is None:
        return False

    if "password" in answer:
        if answer["password"] != "":
            answer["password"] = self.app.services.config_svc.encrypt_password(answer["password"])

    if "tags" in answer and answer["tags"]:
        answer["tags"] = ast.literal_eval(answer["tags"])

    result = {**answer, **profile}
    result["id"] = unique
    return result</code></pre>
</details>
<div class="desc"></div>
</dd>
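Both `questions_nodes` and `questions_profiles` pass the `tags` answer to `ast.literal_eval`, which parses the typed string back into a dictionary while rejecting anything that is not a plain Python literal. A short sketch of the conversion (the tag values are hypothetical):

```python
import ast

# The prompt returns tags as a string such as "{'os': 'ios'}";
# ast.literal_eval rebuilds the dict without executing arbitrary code.
tags = ast.literal_eval("{'os': 'ios', 'screen_length_command': 'terminal length 0'}")
print(tags["os"])  # -> ios
```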
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.forms.Forms" href="#connpy.cli.forms.Forms">Forms</a></code></h4>
<ul class="">
<li><code><a title="connpy.cli.forms.Forms.questions_bulk" href="#connpy.cli.forms.Forms.questions_bulk">questions_bulk</a></code></li>
<li><code><a title="connpy.cli.forms.Forms.questions_edit" href="#connpy.cli.forms.Forms.questions_edit">questions_edit</a></code></li>
<li><code><a title="connpy.cli.forms.Forms.questions_nodes" href="#connpy.cli.forms.Forms.questions_nodes">questions_nodes</a></code></li>
<li><code><a title="connpy.cli.forms.Forms.questions_profiles" href="#connpy.cli.forms.Forms.questions_profiles">questions_profiles</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.6</a>.</p>
</footer>
</body>
</html>
@@ -1,309 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.6">
<title>connpy.cli.help_text API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.help_text</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-functions">Functions</h2>
<dl>
<dt id="connpy.cli.help_text.get_help"><code class="name flex">
<span>def <span class="ident">get_help</span></span>(<span>type, parsers=None)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def get_help(type, parsers=None):
    if type == "export":
        return "Export /path/to/file.yml \[@subfolder1]\[@folder1] \[@subfolderN]\[@folderN]"
    if type == "import":
        return "Import /path/to/file.yml"
    if type == "node":
        return "node\[@subfolder]\[@folder]\nConnect to specific node or show all matching nodes\n\[@subfolder]\[@folder]\nShow all available connections globally or in specified path"
    if type == "usage":
        commands = []
        for subcommand, subparser in parsers.choices.items():
            if subparser.description != None:
                commands.append(subcommand)
        commands = ",".join(commands)
        usage_help = f"connpy [-h] [--add | --del | --mod | --show | --debug] [node|folder] [--sftp]\n connpy {{{commands}}} ..."
        return usage_help
    return get_instructions(type)</code></pre>
</details>
<div class="desc"></div>
</dd>
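`get_help` is a plain string dispatch on `type` that falls through to `get_instructions` for anything it does not handle directly. A condensed, self-contained sketch of that pattern (the stub `get_instructions` below stands in for the real function, and the "usage" branch takes a plain list instead of argparse subparsers):

```python
def get_help(type, parsers=None):
    # Map the simple cases directly to their help strings.
    if type == "import":
        return "Import /path/to/file.yml"
    if type == "usage":
        # The real code derives the subcommand names from argparse subparsers.
        commands = ",".join(sorted(parsers or []))
        return f"connpy {{{commands}}} ..."
    return get_instructions(type)  # fallback for wizard/completion texts

def get_instructions(type="add"):  # stub standing in for the real function
    return ""

print(get_help("import"))
```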
<dt id="connpy.cli.help_text.get_instructions"><code class="name flex">
<span>def <span class="ident">get_instructions</span></span>(<span>type='add')</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def get_instructions(type="add"):
    if type == "add":
        return """
Welcome to Connpy node Addition Wizard!

Here are some important instructions and tips for configuring your new node:

1. **Profiles**:
   - You can use the configured settings in a profile using `@profilename`.

2. **Available Protocols and Apps**:
   - ssh
   - telnet
   - kubectl (`kubectl exec`)
   - docker (`docker exec`)
   - ssm (`aws ssm start-session`)

3. **Optional Values**:
   - You can leave any value empty except for the hostname/IP.

4. **Passwords**:
   - You can pass one or more passwords using comma-separated `@profiles`.

5. **Logging**:
   - You can use the following variables in the logging file name:
     - `${id}`
     - `${unique}`
     - `${host}`
     - `${port}`
     - `${user}`
     - `${protocol}`

6. **Well-Known Tags**:
   - `os`: Identified by AI to generate commands based on the operating system.
   - `screen_length_command`: Used by automation to avoid pagination on different devices (e.g., `terminal length 0` for Cisco devices).
   - `prompt`: Replaces default app prompt to identify the end of output or where the user can start inputting commands.
   - `kube_command`: Replaces the default command (`/bin/bash`) for `kubectl exec`.
   - `docker_command`: Replaces the default command for `docker exec`.
   - `region`: AWS Region used for `aws ssm start-session`.
   - `profile`: AWS Profile used for `aws ssm start-session`.
   - `ssh_options`: Additional SSH options injected when an SSM node is used as a jumphost (e.g., `-i ~/.ssh/key.pem`).
   - `nc_command`: Replaces the default `nc` command used when bridging connections through Docker or Kubernetes (e.g., `ip netns exec global-vrf nc`).
"""
    if type == "bashcompletion":
        return '''
# Bash completion for connpy
# Run: eval "$(connpy config --completion bash)"
# Or add it to your .bashrc

_connpy_autocomplete()
{
    local strings
    strings=$(python3 -m connpy.completion bash ${#COMP_WORDS[@]} "${COMP_WORDS[@]}")

    local IFS=$'\\t'
    COMPREPLY=( $(compgen -W "$strings" -- "${COMP_WORDS[$COMP_CWORD]}") )
}
complete -o nosort -F _connpy_autocomplete conn
complete -o nosort -F _connpy_autocomplete connpy
'''
    if type == "zshcompletion":
        return '''
# Zsh completion for connpy
# Run: eval "$(connpy config --completion zsh)"
# Or add it to your .zshrc
# Make sure compinit is loaded

autoload -U compinit && compinit
_connpy_autocomplete()
{
    local COMP_WORDS num strings
    COMP_WORDS=( $words )
    num=${#COMP_WORDS[@]}
    if [[ $words =~ '.* $' ]]; then
        num=$(($num + 1))
    fi
    strings=$(python3 -m connpy.completion zsh ${num} ${COMP_WORDS[@]})

    local IFS=$'\\t'
    compadd "$@" -- ${=strings}
}
compdef _connpy_autocomplete conn
compdef _connpy_autocomplete connpy
'''
    if type == "fzf_wrapper_bash":
        return '''\n#Here starts bash 0ms fzf wrapper for connpy
connpy() {
    if [ $# -eq 0 ]; then
        local selected
        local configdir=$(cat ~/.config/conn/.folder 2>/dev/null || echo ~/.config/conn)
        if [ -s "$configdir/.fzf_nodes_cache.txt" ]; then
            selected=$(cat "$configdir/.fzf_nodes_cache.txt" | fzf-tmux -i -d 25%)
        else
            command connpy
            return
        fi
        if [ -n "$selected" ]; then
            command connpy "$selected"
        fi
    else
        command connpy "$@"
    fi
}
alias c="connpy"
#Here ends bash 0ms fzf wrapper for connpy
'''
    if type == "fzf_wrapper_zsh":
        return '''\n#Here starts zsh 0ms fzf wrapper for connpy
connpy() {
    if [ $# -eq 0 ]; then
        local selected
        local configdir=$(cat ~/.config/conn/.folder 2>/dev/null || echo ~/.config/conn)
        if [ -s "$configdir/.fzf_nodes_cache.txt" ]; then
            selected=$(cat "$configdir/.fzf_nodes_cache.txt" | fzf-tmux -i -d 25%)
        else
            command connpy
            return
        fi
        if [ -n "$selected" ]; then
            command connpy "$selected"
        fi
    else
        command connpy "$@"
    fi
}
alias c="connpy"
#Here ends zsh 0ms fzf wrapper for connpy
'''
    if type == "run":
        return "node[@subfolder][@folder] command to run\nRun the specific command on the node and print output\n/path/to/file.yaml\nUse a yaml file to run an automation script"
    if type == "generate":
        return r'''---
tasks:
- name: "Config"

  action: 'run' #Action can be test or run. Mandatory

  nodes: #List of nodes to work on. Mandatory
  - 'router1@office' #You can add specific nodes
  - '@aws' #entire folders or subfolders
  - 'router.*@office' #or use regex to filter inside a folder

  commands: #List of commands to send, use {name} to pass variables
  - 'term len 0'
  - 'conf t'
  - 'interface {if}'
  - 'ip address 10.100.100.{id} 255.255.255.255'
  - '{commit}'
  - 'end'

  variables: #Variables to use on commands and expected. Optional
    __global__: #Global variables to use on all nodes, fallback if missing in the node.
      commit: ''
      if: 'loopback100'
    router1@office:
      id: 1
    router2@office:
      id: 2
      commit: 'commit'
    router3@office:
      id: 3
    vrouter1@aws:
      id: 4
    vrouterN@aws:
      id: 5

  output: /home/user/logs #Type of output, if null you only get Connection and test result. Choices are: null,stdout,/path/to/folder. Folder path works on both 'run' and 'test' actions.

  options:
    prompt: r'>$|#$|\$$|>.$|#.$|\$.$' #Optional prompt to check on your devices, default should work on most devices.
    parallel: 10 #Optional number of nodes to run commands on parallel. Default 10.
    timeout: 20 #Optional time to wait in seconds for prompt, expected or EOF. Default 20.

- name: "TestConfig"
  action: 'test'
  nodes:
  - 'router1@office'
  - '@aws'
  commands:
  - 'ping 10.100.100.{id}'
  expected: '!' #Expected text to find when running test action. Mandatory for 'test'
  variables:
    router1@office:
      id: 1
    router2@office:
      id: 2
      commit: 'commit'
    router3@office:
      id: 3
    vrouter1@aws:
      id: 4
    vrouterN@aws:
      id: 5
  output: null
...'''
    return ""</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</section>
<section>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-functions">Functions</a></h3>
<ul class="">
<li><code><a title="connpy.cli.help_text.get_help" href="#connpy.cli.help_text.get_help">get_help</a></code></li>
<li><code><a title="connpy.cli.help_text.get_instructions" href="#connpy.cli.help_text.get_instructions">get_instructions</a></code></li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.6</a>.</p>
</footer>
</body>
</html>
@@ -1,213 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.6">
<title>connpy.cli.helpers API documentation</title>
<meta name="description" content="">
</head>
<body>
|
||||
<main>
|
||||
<article id="content">
|
||||
<header>
|
||||
<h1 class="title">Module <code>connpy.cli.helpers</code></h1>
|
||||
</header>
|
||||
<section id="section-intro">
|
||||
</section>
|
||||
<section>
|
||||
</section>
|
||||
<section>
|
||||
</section>
|
||||
<section>
|
||||
<h2 class="section-title" id="header-functions">Functions</h2>
|
||||
<dl>
|
||||
<dt id="connpy.cli.helpers.choose"><code class="name flex">
|
||||
<span>def <span class="ident">choose</span></span>(<span>app, list_, name, action)</span>
|
||||
</code></dt>
|
||||
<dd>
|
||||
<details class="source">
|
||||
<summary>
|
||||
<span>Expand source code</span>
|
||||
</summary>
|
||||
<pre><code class="python">def choose(app, list_, name, action):
    # Generates an inquirer list to pick from
    # Safeguard: never prompt while running in an autocomplete shell
    if os.environ.get("_ARGCOMPLETE") or os.environ.get("COMP_LINE"):
        return None

    # The early return above already rules out completion shells,
    # so only fzf availability and the app setting need checking here.
    if FzfPrompt and app.fzf:
        fzf_prompt = FzfPrompt(executable_path="fzf-tmux")
        if not app.case:
            fzf_prompt = FzfPrompt(executable_path="fzf-tmux -i")
        answer = fzf_prompt.prompt(list_, fzf_options="-d 25%")
        if len(answer) == 0:
            return None
        else:
            return answer[0]
    else:
        questions = [inquirer.List(name, message="Pick {} to {}:".format(name, action), choices=list_, carousel=True)]
        answer = inquirer.prompt(questions)
        if answer is None:
            return None
        else:
            return answer[name]</code></pre>
</details>
<div class="desc"></div>
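The guard at the top of <code>choose()</code> is the detail worth internalizing: interactive prompts must never fire while the shell is generating completions, because any output would corrupt the completion list. A minimal standalone sketch of that decision follows; the helper name <code>should_prompt</code> is hypothetical, not part of connpy.

```python
import os

def should_prompt(environ=None):
    # Hypothetical helper mirroring the guard in choose():
    # argcomplete sets _ARGCOMPLETE (and the shell sets COMP_LINE)
    # while evaluating completions, so prompting there is unsafe.
    env = os.environ if environ is None else environ
    return not (env.get("_ARGCOMPLETE") or env.get("COMP_LINE"))

print(should_prompt({}))                      # True: normal interactive run
print(should_prompt({"_ARGCOMPLETE": "1"}))   # False: completion shell
```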
</dd>
<dt id="connpy.cli.helpers.folders_completer"><code class="name flex">
<span>def <span class="ident">folders_completer</span></span>(<span>prefix, parsed_args, **kwargs)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def folders_completer(prefix, parsed_args, **kwargs):
    configdir = get_config_dir()
    cache_file = os.path.join(configdir, '.folders_cache.txt')
    if os.path.exists(cache_file):
        with open(cache_file, "r") as f:
            return [line.strip() for line in f if line.startswith(prefix)]
    return []</code></pre>
</details>
<div class="desc"></div>
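The folders, nodes, and profiles completers all follow one pattern: read a plain-text cache file, keep the lines matching the typed prefix, and return an empty list when no cache exists yet. A self-contained sketch of that pattern, exercised against a throwaway cache file (<code>read_cache</code> is a hypothetical name used only for illustration):

```python
import os
import tempfile

def read_cache(cache_file, prefix):
    # Same shape as folders_completer: one candidate per line,
    # filtered by the prefix the user has typed so far.
    if os.path.exists(cache_file):
        with open(cache_file, "r") as f:
            return [line.strip() for line in f if line.startswith(prefix)]
    return []

# Demo with a throwaway cache file.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, ".folders_cache.txt")
    with open(path, "w") as f:
        f.write("@prod\n@staging\n@dev\n")
    print(read_cache(path, "@s"))   # ['@staging']
    print(read_cache(path, "x"))    # []
```

Because the completer only ever reads a pre-built cache, completion stays fast even when the real configuration is large.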
</dd>
<dt id="connpy.cli.helpers.get_config_dir"><code class="name flex">
<span>def <span class="ident">get_config_dir</span></span>(<span>)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def get_config_dir():
    home = os.path.expanduser("~")
    defaultdir = os.path.join(home, '.config/conn')
    pathfile = os.path.join(defaultdir, '.folder')
    try:
        with open(pathfile, "r") as f:
            return f.read().strip()
    except OSError:
        # No .folder redirect file: fall back to the default directory
        return defaultdir</code></pre>
</details>
<div class="desc"></div>
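The <code>.folder</code> file acts as an indirection: if it exists inside the default config directory, its contents name the real config location; otherwise the default wins. A sketch of the mechanism with the default directory parameterized so it can be demonstrated without touching the home directory (<code>resolve_config_dir</code> is a hypothetical name):

```python
import os
import tempfile

def resolve_config_dir(defaultdir):
    # Same idea as get_config_dir(): a '.folder' file inside the
    # default directory may redirect the config dir elsewhere.
    pathfile = os.path.join(defaultdir, ".folder")
    try:
        with open(pathfile, "r") as f:
            return f.read().strip()
    except OSError:
        return defaultdir

with tempfile.TemporaryDirectory() as d:
    print(resolve_config_dir(d) == d)      # True: no .folder redirect yet
    with open(os.path.join(d, ".folder"), "w") as f:
        f.write("/srv/conn-config\n")
    print(resolve_config_dir(d))           # /srv/conn-config
```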
</dd>
<dt id="connpy.cli.helpers.nodes_completer"><code class="name flex">
<span>def <span class="ident">nodes_completer</span></span>(<span>prefix, parsed_args, **kwargs)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def nodes_completer(prefix, parsed_args, **kwargs):
    configdir = get_config_dir()
    cache_file = os.path.join(configdir, '.fzf_nodes_cache.txt')
    if os.path.exists(cache_file):
        with open(cache_file, "r") as f:
            return [line.strip() for line in f if line.startswith(prefix)]
    return []</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.helpers.profiles_completer"><code class="name flex">
<span>def <span class="ident">profiles_completer</span></span>(<span>prefix, parsed_args, **kwargs)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def profiles_completer(prefix, parsed_args, **kwargs):
    configdir = get_config_dir()
    cache_file = os.path.join(configdir, '.profiles_cache.txt')
    if os.path.exists(cache_file):
        with open(cache_file, "r") as f:
            return [line.strip() for line in f if line.startswith(prefix)]
    return []</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.helpers.toplevel_completer"><code class="name flex">
<span>def <span class="ident">toplevel_completer</span></span>(<span>prefix, parsed_args, **kwargs)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def toplevel_completer(prefix, parsed_args, **kwargs):
    commands = ["node", "profile", "move", "mv", "copy", "cp", "list", "ls", "bulk", "export", "import", "ai", "run", "api", "context", "plugin", "config", "sync"]

    configdir = get_config_dir()
    cache_file = os.path.join(configdir, '.fzf_nodes_cache.txt')
    nodes = []
    if os.path.exists(cache_file):
        with open(cache_file, "r") as f:
            nodes = [line.strip() for line in f if line.startswith(prefix)]

    cache_folders = os.path.join(configdir, '.folders_cache.txt')
    if os.path.exists(cache_folders):
        with open(cache_folders, "r") as f:
            nodes += [line.strip() for line in f if line.startswith(prefix)]

    return [c for c in commands + nodes if c.startswith(prefix)]</code></pre>
</details>
<div class="desc"></div>
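The top-level completer merges the fixed CLI verbs with whatever node and folder names the caches hold, then applies the prefix filter once over the combined list. The merge step can be sketched as a pure function (names here are illustrative, not connpy's):

```python
def complete_toplevel(prefix, commands, cached_names):
    # Fixed verbs and cached node/folder names compete equally;
    # a single prefix filter decides what the shell offers.
    return [c for c in commands + cached_names if c.startswith(prefix)]

commands = ["node", "profile", "list", "ls"]
cached = ["router1@core", "@lab"]
print(complete_toplevel("l", commands, cached))   # ['list', 'ls']
print(complete_toplevel("ro", commands, cached))  # ['router1@core']
```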
</dd>
</dl>
</section>
<section>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-functions">Functions</a></h3>
<ul class="two-column">
<li><code><a title="connpy.cli.helpers.choose" href="#connpy.cli.helpers.choose">choose</a></code></li>
<li><code><a title="connpy.cli.helpers.folders_completer" href="#connpy.cli.helpers.folders_completer">folders_completer</a></code></li>
<li><code><a title="connpy.cli.helpers.get_config_dir" href="#connpy.cli.helpers.get_config_dir">get_config_dir</a></code></li>
<li><code><a title="connpy.cli.helpers.nodes_completer" href="#connpy.cli.helpers.nodes_completer">nodes_completer</a></code></li>
<li><code><a title="connpy.cli.helpers.profiles_completer" href="#connpy.cli.helpers.profiles_completer">profiles_completer</a></code></li>
<li><code><a title="connpy.cli.helpers.toplevel_completer" href="#connpy.cli.helpers.toplevel_completer">toplevel_completer</a></code></li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.6</a>.</p>
</footer>
</body>
</html>
@@ -1,278 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.6">
<title>connpy.cli.import_export_handler API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.import_export_handler</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.import_export_handler.ImportExportHandler"><code class="flex name class">
<span>class <span class="ident">ImportExportHandler</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class ImportExportHandler:
    def __init__(self, app):
        self.app = app
        self.forms = Forms(app)

    def dispatch_import(self, args):
        file_path = args.data[0]
        try:
            printer.warning("This could overwrite your current configuration!")
            question = [inquirer.Confirm("import", message=f"Are you sure you want to import {file_path}?")]
            confirm = inquirer.prompt(question)
            if confirm is None or not confirm["import"]:
                sys.exit(7)

            self.app.services.import_export.import_from_file(file_path)
            printer.success(f"File {file_path} imported successfully.")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def dispatch_export(self, args):
        file_path = args.data[0]
        folders = args.data[1:] if len(args.data) > 1 else None
        try:
            self.app.services.import_export.export_to_file(file_path, folders=folders)
            printer.success(f"File {file_path} generated successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)
        sys.exit()

    def bulk(self, args):
        if args.file and os.path.isfile(args.file[0]):
            with open(args.file[0], 'r') as f:
                lines = f.readlines()

            # Expect at least two lines: nodes first, hosts second
            if len(lines) < 2:
                printer.error("The file must contain at least two lines: one for nodes, one for hosts.")
                sys.exit(11)

            nodes = lines[0].strip()
            hosts = lines[1].strip()
            newnodes = self.forms.questions_bulk(nodes, hosts)
        else:
            newnodes = self.forms.questions_bulk()

        if newnodes is False:
            sys.exit(7)

        if not self.app.case:
            newnodes["location"] = newnodes["location"].lower()
            newnodes["ids"] = newnodes["ids"].lower()

        # Handle the case where location might be a file reference (e.g. from a prompt)
        location = newnodes["location"]
        if location.startswith("@") and "/" in location:
            # Extract the actual @folder part (e.g. @testall from @testall/.folders_cache.txt)
            location = location.split("/")[0]
            newnodes["location"] = location

        ids = newnodes["ids"].split(",")
        # Append location to each id for proper folder assignment
        location = newnodes["location"]
        if location:
            ids = [f"{i}{location}" for i in ids]

        hosts = newnodes["host"].split(",")

        try:
            count = self.app.services.nodes.bulk_add(ids, hosts, newnodes)
            if count > 0:
                printer.success(f"Successfully added {count} nodes.")
            else:
                printer.info("0 nodes added")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.import_export_handler.ImportExportHandler.bulk"><code class="name flex">
<span>def <span class="ident">bulk</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def bulk(self, args):
    if args.file and os.path.isfile(args.file[0]):
        with open(args.file[0], 'r') as f:
            lines = f.readlines()

        # Expect at least two lines: nodes first, hosts second
        if len(lines) < 2:
            printer.error("The file must contain at least two lines: one for nodes, one for hosts.")
            sys.exit(11)

        nodes = lines[0].strip()
        hosts = lines[1].strip()
        newnodes = self.forms.questions_bulk(nodes, hosts)
    else:
        newnodes = self.forms.questions_bulk()

    if newnodes is False:
        sys.exit(7)

    if not self.app.case:
        newnodes["location"] = newnodes["location"].lower()
        newnodes["ids"] = newnodes["ids"].lower()

    # Handle the case where location might be a file reference (e.g. from a prompt)
    location = newnodes["location"]
    if location.startswith("@") and "/" in location:
        # Extract the actual @folder part (e.g. @testall from @testall/.folders_cache.txt)
        location = location.split("/")[0]
        newnodes["location"] = location

    ids = newnodes["ids"].split(",")
    # Append location to each id for proper folder assignment
    location = newnodes["location"]
    if location:
        ids = [f"{i}{location}" for i in ids]

    hosts = newnodes["host"].split(",")

    try:
        count = self.app.services.nodes.bulk_add(ids, hosts, newnodes)
        if count > 0:
            printer.success(f"Successfully added {count} nodes.")
        else:
            printer.info("0 nodes added")
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
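Two transformations in <code>bulk()</code> deserve a closer look: stripping cache-file residue that a completer can inject into the location (e.g. <code>@lab/.folders_cache.txt</code>), and suffixing every node id with the folder. They can be isolated as a pure function for illustration (<code>normalize_bulk_ids</code> is a hypothetical name, not connpy API):

```python
def normalize_bulk_ids(ids_csv, location):
    # Mirror the tail of bulk(): clean a completer-injected location
    # ('@lab/.folders_cache.txt' becomes '@lab'), then append the
    # folder to every comma-separated node id.
    if location.startswith("@") and "/" in location:
        location = location.split("/")[0]
    ids = ids_csv.split(",")
    if location:
        ids = [f"{i}{location}" for i in ids]
    return ids, location

ids, loc = normalize_bulk_ids("r1,r2", "@lab/.folders_cache.txt")
print(loc)   # @lab
print(ids)   # ['r1@lab', 'r2@lab']
```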
</dd>
<dt id="connpy.cli.import_export_handler.ImportExportHandler.dispatch_export"><code class="name flex">
<span>def <span class="ident">dispatch_export</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch_export(self, args):
    file_path = args.data[0]
    folders = args.data[1:] if len(args.data) > 1 else None
    try:
        self.app.services.import_export.export_to_file(file_path, folders=folders)
        printer.success(f"File {file_path} generated successfully")
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)
    sys.exit()</code></pre>
</details>
<div class="desc"></div>
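The positional handling here is implicit: the first element of <code>args.data</code> is the output file, and any remaining elements are folder filters, with <code>None</code> meaning export everything. A small sketch of that split (<code>split_export_args</code> is a hypothetical name):

```python
def split_export_args(data):
    # dispatch_export treats the first positional as the target file
    # and any remaining positionals as folder filters (None = all).
    file_path = data[0]
    folders = data[1:] if len(data) > 1 else None
    return file_path, folders

print(split_export_args(["backup.yaml"]))           # ('backup.yaml', None)
print(split_export_args(["backup.yaml", "@lab"]))   # ('backup.yaml', ['@lab'])
```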
</dd>
<dt id="connpy.cli.import_export_handler.ImportExportHandler.dispatch_import"><code class="name flex">
<span>def <span class="ident">dispatch_import</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch_import(self, args):
    file_path = args.data[0]
    try:
        printer.warning("This could overwrite your current configuration!")
        question = [inquirer.Confirm("import", message=f"Are you sure you want to import {file_path}?")]
        confirm = inquirer.prompt(question)
        if confirm is None or not confirm["import"]:
            sys.exit(7)

        self.app.services.import_export.import_from_file(file_path)
        printer.success(f"File {file_path} imported successfully.")
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.import_export_handler.ImportExportHandler" href="#connpy.cli.import_export_handler.ImportExportHandler">ImportExportHandler</a></code></h4>
<ul class="">
<li><code><a title="connpy.cli.import_export_handler.ImportExportHandler.bulk" href="#connpy.cli.import_export_handler.ImportExportHandler.bulk">bulk</a></code></li>
<li><code><a title="connpy.cli.import_export_handler.ImportExportHandler.dispatch_export" href="#connpy.cli.import_export_handler.ImportExportHandler.dispatch_export">dispatch_export</a></code></li>
<li><code><a title="connpy.cli.import_export_handler.ImportExportHandler.dispatch_import" href="#connpy.cli.import_export_handler.ImportExportHandler.dispatch_import">dispatch_import</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.6</a>.</p>
</footer>
</body>
</html>
@@ -1,143 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.6">
<title>connpy.cli API documentation</title>
<meta name="description" content="">
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli</code></h1>
</header>
<section id="section-intro">
</section>
<section>
<h2 class="section-title" id="header-submodules">Sub-modules</h2>
<dl>
<dt><code class="name"><a title="connpy.cli.ai_handler" href="ai_handler.html">connpy.cli.ai_handler</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.api_handler" href="api_handler.html">connpy.cli.api_handler</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.config_handler" href="config_handler.html">connpy.cli.config_handler</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.context_handler" href="context_handler.html">connpy.cli.context_handler</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.forms" href="forms.html">connpy.cli.forms</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.help_text" href="help_text.html">connpy.cli.help_text</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.helpers" href="helpers.html">connpy.cli.helpers</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.import_export_handler" href="import_export_handler.html">connpy.cli.import_export_handler</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.node_handler" href="node_handler.html">connpy.cli.node_handler</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.plugin_handler" href="plugin_handler.html">connpy.cli.plugin_handler</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.profile_handler" href="profile_handler.html">connpy.cli.profile_handler</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.run_handler" href="run_handler.html">connpy.cli.run_handler</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.sync_handler" href="sync_handler.html">connpy.cli.sync_handler</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.validators" href="validators.html">connpy.cli.validators</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</section>
<section>
</section>
<section>
</section>
<section>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy" href="../index.html">connpy</a></code></li>
</ul>
</li>
<li><h3><a href="#header-submodules">Sub-modules</a></h3>
<ul>
<li><code><a title="connpy.cli.ai_handler" href="ai_handler.html">connpy.cli.ai_handler</a></code></li>
<li><code><a title="connpy.cli.api_handler" href="api_handler.html">connpy.cli.api_handler</a></code></li>
<li><code><a title="connpy.cli.config_handler" href="config_handler.html">connpy.cli.config_handler</a></code></li>
<li><code><a title="connpy.cli.context_handler" href="context_handler.html">connpy.cli.context_handler</a></code></li>
<li><code><a title="connpy.cli.forms" href="forms.html">connpy.cli.forms</a></code></li>
<li><code><a title="connpy.cli.help_text" href="help_text.html">connpy.cli.help_text</a></code></li>
<li><code><a title="connpy.cli.helpers" href="helpers.html">connpy.cli.helpers</a></code></li>
<li><code><a title="connpy.cli.import_export_handler" href="import_export_handler.html">connpy.cli.import_export_handler</a></code></li>
<li><code><a title="connpy.cli.node_handler" href="node_handler.html">connpy.cli.node_handler</a></code></li>
<li><code><a title="connpy.cli.plugin_handler" href="plugin_handler.html">connpy.cli.plugin_handler</a></code></li>
<li><code><a title="connpy.cli.profile_handler" href="profile_handler.html">connpy.cli.profile_handler</a></code></li>
<li><code><a title="connpy.cli.run_handler" href="run_handler.html">connpy.cli.run_handler</a></code></li>
<li><code><a title="connpy.cli.sync_handler" href="sync_handler.html">connpy.cli.sync_handler</a></code></li>
<li><code><a title="connpy.cli.validators" href="validators.html">connpy.cli.validators</a></code></li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.6</a>.</p>
</footer>
</body>
</html>
@@ -1,612 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.6">
<title>connpy.cli.node_handler API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.node_handler</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.node_handler.NodeHandler"><code class="flex name class">
<span>class <span class="ident">NodeHandler</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class NodeHandler:
    def __init__(self, app):
        self.app = app
        self.forms = Forms(app)

    def dispatch(self, args):
        if not self.app.case and args.data != None:
            args.data = args.data.lower()
        actions = {"version": self.version, "connect": self.connect, "add": self.add, "del": self.delete, "mod": self.modify, "show": self.show}
        return actions.get(args.action)(args)

    def version(self, args):
        from .._version import __version__
        printer.info(f"Connpy {__version__}")

    def connect(self, args):
        if args.data == None:
            try:
                matches = self.app.services.nodes.list_nodes()
            except Exception as e:
                printer.error(f"Failed to list nodes: {e}")
                sys.exit(1)

            if len(matches) == 0:
                printer.warning("There are no nodes created")
                printer.info("try: connpy --help")
                sys.exit(9)
        else:
            try:
                matches = self.app.services.nodes.list_nodes(args.data)
            except Exception:
                matches = []

            if len(matches) == 0:
                printer.error(f"{args.data} not found")
                sys.exit(2)
            elif len(matches) > 1:
                matches[0] = choose(self.app, matches, "node", "connect")

        if matches[0] == None:
            sys.exit(7)

        try:
            self.app.services.nodes.connect_node(
                matches[0],
                sftp=args.sftp,
                debug=args.debug,
                logger=self.app._service_logger
            )
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def delete(self, args):
        if args.data == None:
            printer.error("Missing argument node")
            sys.exit(3)

        is_folder = args.data.startswith("@")
        try:
            if is_folder:
                matches = self.app.services.nodes.list_folders(args.data)
            else:
                matches = self.app.services.nodes.list_nodes(args.data)
        except Exception:
            matches = []

        if len(matches) == 0:
            printer.error(f"{args.data} not found")
            sys.exit(2)

        printer.info(f"Removing: {matches}")
        question = [inquirer.Confirm("delete", message="Are you sure you want to continue?")]
        confirm = inquirer.prompt(question)
        if confirm == None or not confirm["delete"]:
            sys.exit(7)

        try:
            for item in matches:
                self.app.services.nodes.delete_node(item, is_folder=is_folder)

            if len(matches) == 1:
                printer.success(f"{matches[0]} deleted successfully")
            else:
                printer.success(f"{len(matches)} items deleted successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def add(self, args):
        try:
            args.data = self.app._type_node(args.data)
        except ValueError as e:
            printer.error(str(e))
            sys.exit(3)

        if args.data == None:
            printer.error("Missing argument node")
            sys.exit(3)

        is_folder = args.data.startswith("@")
        try:
            if is_folder:
                uniques = self.app.services.nodes.explode_unique(args.data)
                if not uniques:
                    raise InvalidConfigurationError(f"Invalid folder {args.data}")
                self.app.services.nodes.add_node(args.data, {}, is_folder=True)
                printer.success(f"{args.data} added successfully")
            else:
                if args.data in self.app.nodes_list:
                    printer.error(f"Node '{args.data}' already exists.")
                    sys.exit(1)
                uniques = self.app.services.nodes.explode_unique(args.data)

                # Fast fail if parent folder does not exist
                self.app.services.nodes.validate_parent_folder(args.data)

                printer.console.print(Markdown(get_instructions()))

                new_node_data = self.forms.questions_nodes(args.data, uniques)
                if not new_node_data:
                    sys.exit(7)
                self.app.services.nodes.add_node(args.data, new_node_data)
                printer.success(f"{args.data} added successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def show(self, args):
        if args.data == None:
            printer.error("Missing argument node")
            sys.exit(3)

        try:
            matches = self.app.services.nodes.list_nodes(args.data)
        except Exception:
            matches = []

        if len(matches) == 0:
            printer.error(f"{args.data} not found")
            sys.exit(2)
        elif len(matches) > 1:
            matches[0] = choose(self.app, matches, "node", "show")

        if matches[0] == None:
            sys.exit(7)

        try:
            node = self.app.services.nodes.get_node_details(matches[0])
            yaml_output = yaml.dump(node, sort_keys=False, default_flow_style=False)
            printer.data(matches[0], yaml_output)
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def modify(self, args):
        if args.data == None:
            printer.error("Missing argument node")
            sys.exit(3)

        try:
            matches = self.app.services.nodes.list_nodes(args.data)
        except Exception:
            matches = []

        if len(matches) == 0:
            printer.error(f"No connection found with filter: {args.data}")
            sys.exit(2)

        unique = matches[0] if len(matches) == 1 else None
        uniques = self.app.services.nodes.explode_unique(unique) if unique else {"id": None, "folder": None}

        printer.info(f"Editing: {matches}")
        node_details = {}
        for i in matches:
            node_details[i] = self.app.services.nodes.get_node_details(i)

        edits = self.forms.questions_edit()
        if edits == None:
            sys.exit(7)

        # Use first match as base for defaults if multiple matches exist
        base_unique = matches[0]
        base_uniques = self.app.services.nodes.explode_unique(base_unique)
        updatenode = self.forms.questions_nodes(base_unique, base_uniques, edit=edits)
        if not updatenode:
            sys.exit(7)

        try:
            if len(matches) == 1:
                # Comparison for "Nothing to do"
                current = node_details[matches[0]].copy()
                current.update(uniques)
                current["type"] = "connection"
                if sorted(updatenode.items()) == sorted(current.items()):
                    printer.info("Nothing to do here")
                    return
                self.app.services.nodes.update_node(matches[0], updatenode)
                printer.success(f"{args.data} edited successfully")
            else:
                editcount = 0
                for k in matches:
                    updated_item = self.app.services.nodes.explode_unique(k)
                    updated_item["type"] = "connection"
                    updated_item.update(node_details[k])

                    this_item_changed = False
                    for key, should_edit in edits.items():
                        if should_edit:
                            this_item_changed = True
                            updated_item[key] = updatenode[key]

                    if this_item_changed:
                        editcount += 1
                        self.app.services.nodes.update_node(k, updated_item)

                if editcount == 0:
                    printer.info("Nothing to do here")
                else:
                    printer.success(f"{matches} edited successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.node_handler.NodeHandler.add"><code class="name flex">
<span>def <span class="ident">add</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def add(self, args):
    try:
        args.data = self.app._type_node(args.data)
    except ValueError as e:
        printer.error(str(e))
        sys.exit(3)

    if args.data == None:
        printer.error("Missing argument node")
        sys.exit(3)

    is_folder = args.data.startswith("@")
    try:
        if is_folder:
            uniques = self.app.services.nodes.explode_unique(args.data)
            if not uniques:
                raise InvalidConfigurationError(f"Invalid folder {args.data}")
            self.app.services.nodes.add_node(args.data, {}, is_folder=True)
            printer.success(f"{args.data} added successfully")
        else:
            if args.data in self.app.nodes_list:
                printer.error(f"Node '{args.data}' already exists.")
                sys.exit(1)
            uniques = self.app.services.nodes.explode_unique(args.data)

            # Fast fail if parent folder does not exist
            self.app.services.nodes.validate_parent_folder(args.data)

            printer.console.print(Markdown(get_instructions()))

            new_node_data = self.forms.questions_nodes(args.data, uniques)
            if not new_node_data:
                sys.exit(7)
            self.app.services.nodes.add_node(args.data, new_node_data)
            printer.success(f"{args.data} added successfully")
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.node_handler.NodeHandler.connect"><code class="name flex">
<span>def <span class="ident">connect</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def connect(self, args):
    if args.data == None:
        try:
            matches = self.app.services.nodes.list_nodes()
        except Exception as e:
            printer.error(f"Failed to list nodes: {e}")
            sys.exit(1)

        if len(matches) == 0:
            printer.warning("There are no nodes created")
            printer.info("try: connpy --help")
            sys.exit(9)
    else:
        try:
            matches = self.app.services.nodes.list_nodes(args.data)
        except Exception:
            matches = []

        if len(matches) == 0:
            printer.error(f"{args.data} not found")
            sys.exit(2)
        elif len(matches) > 1:
            matches[0] = choose(self.app, matches, "node", "connect")

    if matches[0] == None:
        sys.exit(7)

    try:
        self.app.services.nodes.connect_node(
            matches[0],
            sftp=args.sftp,
            debug=args.debug,
            logger=self.app._service_logger
        )
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.node_handler.NodeHandler.delete"><code class="name flex">
<span>def <span class="ident">delete</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def delete(self, args):
    if args.data == None:
        printer.error("Missing argument node")
        sys.exit(3)

    is_folder = args.data.startswith("@")
    try:
        if is_folder:
            matches = self.app.services.nodes.list_folders(args.data)
        else:
            matches = self.app.services.nodes.list_nodes(args.data)
    except Exception:
        matches = []

    if len(matches) == 0:
        printer.error(f"{args.data} not found")
        sys.exit(2)

    printer.info(f"Removing: {matches}")
    question = [inquirer.Confirm("delete", message="Are you sure you want to continue?")]
    confirm = inquirer.prompt(question)
    if confirm == None or not confirm["delete"]:
        sys.exit(7)

    try:
        for item in matches:
            self.app.services.nodes.delete_node(item, is_folder=is_folder)

        if len(matches) == 1:
            printer.success(f"{matches[0]} deleted successfully")
        else:
            printer.success(f"{len(matches)} items deleted successfully")
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.node_handler.NodeHandler.dispatch"><code class="name flex">
<span>def <span class="ident">dispatch</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch(self, args):
    if not self.app.case and args.data != None:
        args.data = args.data.lower()
    actions = {"version": self.version, "connect": self.connect, "add": self.add, "del": self.delete, "mod": self.modify, "show": self.show}
    return actions.get(args.action)(args)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.node_handler.NodeHandler.modify"><code class="name flex">
<span>def <span class="ident">modify</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def modify(self, args):
    if args.data == None:
        printer.error("Missing argument node")
        sys.exit(3)

    try:
        matches = self.app.services.nodes.list_nodes(args.data)
    except Exception:
        matches = []

    if len(matches) == 0:
        printer.error(f"No connection found with filter: {args.data}")
        sys.exit(2)

    unique = matches[0] if len(matches) == 1 else None
    uniques = self.app.services.nodes.explode_unique(unique) if unique else {"id": None, "folder": None}

    printer.info(f"Editing: {matches}")
    node_details = {}
    for i in matches:
        node_details[i] = self.app.services.nodes.get_node_details(i)

    edits = self.forms.questions_edit()
    if edits == None:
        sys.exit(7)

    # Use first match as base for defaults if multiple matches exist
    base_unique = matches[0]
    base_uniques = self.app.services.nodes.explode_unique(base_unique)
    updatenode = self.forms.questions_nodes(base_unique, base_uniques, edit=edits)
    if not updatenode:
        sys.exit(7)

    try:
        if len(matches) == 1:
            # Comparison for "Nothing to do"
            current = node_details[matches[0]].copy()
            current.update(uniques)
            current["type"] = "connection"
            if sorted(updatenode.items()) == sorted(current.items()):
                printer.info("Nothing to do here")
                return
            self.app.services.nodes.update_node(matches[0], updatenode)
            printer.success(f"{args.data} edited successfully")
        else:
            editcount = 0
            for k in matches:
                updated_item = self.app.services.nodes.explode_unique(k)
                updated_item["type"] = "connection"
                updated_item.update(node_details[k])

                this_item_changed = False
                for key, should_edit in edits.items():
                    if should_edit:
                        this_item_changed = True
                        updated_item[key] = updatenode[key]

                if this_item_changed:
                    editcount += 1
                    self.app.services.nodes.update_node(k, updated_item)

            if editcount == 0:
                printer.info("Nothing to do here")
            else:
                printer.success(f"{matches} edited successfully")
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.node_handler.NodeHandler.show"><code class="name flex">
<span>def <span class="ident">show</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def show(self, args):
    if args.data == None:
        printer.error("Missing argument node")
        sys.exit(3)

    try:
        matches = self.app.services.nodes.list_nodes(args.data)
    except Exception:
        matches = []

    if len(matches) == 0:
        printer.error(f"{args.data} not found")
        sys.exit(2)
    elif len(matches) > 1:
        matches[0] = choose(self.app, matches, "node", "show")

    if matches[0] == None:
        sys.exit(7)

    try:
        node = self.app.services.nodes.get_node_details(matches[0])
        yaml_output = yaml.dump(node, sort_keys=False, default_flow_style=False)
        printer.data(matches[0], yaml_output)
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.node_handler.NodeHandler.version"><code class="name flex">
<span>def <span class="ident">version</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def version(self, args):
    from .._version import __version__
    printer.info(f"Connpy {__version__}")</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.node_handler.NodeHandler" href="#connpy.cli.node_handler.NodeHandler">NodeHandler</a></code></h4>
<ul class="two-column">
<li><code><a title="connpy.cli.node_handler.NodeHandler.add" href="#connpy.cli.node_handler.NodeHandler.add">add</a></code></li>
<li><code><a title="connpy.cli.node_handler.NodeHandler.connect" href="#connpy.cli.node_handler.NodeHandler.connect">connect</a></code></li>
<li><code><a title="connpy.cli.node_handler.NodeHandler.delete" href="#connpy.cli.node_handler.NodeHandler.delete">delete</a></code></li>
<li><code><a title="connpy.cli.node_handler.NodeHandler.dispatch" href="#connpy.cli.node_handler.NodeHandler.dispatch">dispatch</a></code></li>
<li><code><a title="connpy.cli.node_handler.NodeHandler.modify" href="#connpy.cli.node_handler.NodeHandler.modify">modify</a></code></li>
<li><code><a title="connpy.cli.node_handler.NodeHandler.show" href="#connpy.cli.node_handler.NodeHandler.show">show</a></code></li>
<li><code><a title="connpy.cli.node_handler.NodeHandler.version" href="#connpy.cli.node_handler.NodeHandler.version">version</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.6</a>.</p>
</footer>
</body>
</html>
@@ -1,391 +0,0 @@
|
||||
<!doctype html>
|
||||
<html lang="en">
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
|
||||
<meta name="generator" content="pdoc3 0.11.6">
|
||||
<title>connpy.cli.plugin_handler API documentation</title>
|
||||
<meta name="description" content="">
|
||||
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
|
||||
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
|
||||
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
|
||||
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.plugin_handler</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.plugin_handler.PluginHandler"><code class="flex name class">
<span>class <span class="ident">PluginHandler</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class PluginHandler:
    def __init__(self, app):
        self.app = app

    def dispatch(self, args):
        try:
            # The target backend (PluginService or PluginStub) follows the app's 'mode';
            # app.services.plugins already points at the right one, and local plugin
            # operations are routed through a local PluginService explicitly below.

            is_remote = getattr(args, "remote", False)
            if is_remote and self.app.services.mode != "remote":
                printer.error("Cannot use --remote flag when not running in remote mode.")
                return

            if args.add:
                self.app.services.plugins.add_plugin(args.add[0], args.add[1])
                printer.success(f"Plugin {args.add[0]} added successfully{' remotely' if is_remote else ''}.")
            elif args.update:
                self.app.services.plugins.add_plugin(args.update[0], args.update[1], update=True)
                printer.success(f"Plugin {args.update[0]} updated successfully{' remotely' if is_remote else ''}.")
            elif args.delete:
                self.app.services.plugins.delete_plugin(args.delete[0])
                printer.success(f"Plugin {args.delete[0]} deleted successfully{' remotely' if is_remote else ''}.")
            elif args.enable:
                name = args.enable[0]
                if is_remote:
                    self.app.plugins.preferences[name] = "remote"
                else:
                    if name in self.app.plugins.preferences:
                        del self.app.plugins.preferences[name]

                self.app.plugins._save_preferences(self.app.services.config_svc.get_default_dir())

                # Always try to enable it locally (remove .bkp) if it exists,
                # regardless of mode, to keep files consistent with the "enabled" state
                try:
                    # We use a local service instance to ensure we touch local files
                    from ..services.plugin_service import PluginService
                    local_svc = PluginService(self.app.services.config)
                    local_svc.enable_plugin(name)
                except Exception:
                    pass  # Ignore if not found locally or already enabled

                if is_remote and self.app.services.mode == "remote":
                    self.app.services.plugins.enable_plugin(name)

                printer.success(f"Plugin {name} enabled successfully{' remotely' if is_remote else ' locally'}.")
            elif args.disable:
                name = args.disable[0]
                success = False
                if is_remote:
                    if self.app.services.mode == "remote":
                        self.app.services.plugins.disable_plugin(name)
                        success = True
                else:
                    # Disable locally
                    from ..services.plugin_service import PluginService
                    local_svc = PluginService(self.app.services.config)
                    try:
                        if local_svc.disable_plugin(name):
                            success = True
                    except Exception as e:
                        printer.warning(f"Could not disable local plugin: {e}")

                if success:
                    printer.success(f"Plugin {name} disabled successfully{' remotely' if is_remote else ' locally'}.")

                # If any remote operation was performed, trigger a sync to update local cache immediately
                if is_remote and self.app.services.mode == "remote":
                    try:
                        import os
                        cache_dir = os.path.join(self.app.services.config_svc.get_default_dir(), "remote_plugins")
                        # force_sync=True bypasses the subparser choice check;
                        # otherwise the hasher decides whether a sync is needed.
                        self.app.plugins._import_remote_plugins_to_argparse(
                            self.app.services.plugins,
                            self.app.subparsers,  # we need to make sure this is available
                            cache_dir,
                            force_sync=True
                        )
                    except Exception:
                        pass

            elif getattr(args, "sync", False):
                # The actual sync logic is performed in connapp.py during init
                # if the --sync flag is detected in sys.argv
                printer.success("Remote plugins synchronized successfully.")
            elif args.list:
                # We need to fetch both local and remote if in remote mode
                local_plugins = {}
                remote_plugins = {}

                # Fetch depending on mode
                if self.app.services.mode == "remote":
                    # For local we need to instantiate a local plugin service bypassing the stub
                    from ..services.plugin_service import PluginService
                    local_svc = PluginService(self.app.services.config)
                    local_plugins = local_svc.list_plugins()
                    remote_plugins = self.app.services.plugins.list_plugins()
                else:
                    local_plugins = self.app.services.plugins.list_plugins()

                from rich.table import Table

                table = Table(title="Available Plugins", show_header=True, header_style="bold cyan")
                table.add_column("Plugin", style="cyan")
                table.add_column("State", style="bold")
                table.add_column("Origin", style="magenta")

                # Populate local plugins
                for name, details in local_plugins.items():
                    state = "Disabled" if not details.get("enabled", True) else "Active"
                    color = "red" if state == "Disabled" else "green"

                    if self.app.services.mode == "remote" and state == "Active":
                        if self.app.plugins.preferences.get(name) == "remote":
                            state = "Shadowed (Override by Remote)"
                            color = "yellow"

                    table.add_row(name, f"[{color}]{state}[/{color}]", "Local")

                # Populate remote plugins
                if self.app.services.mode == "remote":
                    for name, details in remote_plugins.items():
                        state = "Disabled" if not details.get("enabled", True) else "Active"
                        color = "red" if state == "Disabled" else "green"

                        if state == "Active":
                            pref = self.app.plugins.preferences.get(name, "local")
                            # If preference isn't remote and the plugin exists locally, local takes priority
                            if pref != "remote" and name in local_plugins:
                                state = "Shadowed (Override by Local)"
                                color = "yellow"

                        table.add_row(name, f"[{color}]{state}[/{color}]", "Remote")

                if not local_plugins and not remote_plugins:
                    printer.console.print(" No plugins found.")
                else:
                    printer.console.print(table)

        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
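The constructor stores the application object, and <code>dispatch</code> routes one mutually exclusive argparse flag (<code>--add</code>, <code>--update</code>, <code>--delete</code>, <code>--enable</code>, <code>--disable</code>, <code>--sync</code>, <code>--list</code>) to the matching plugin operation. A minimal, self-contained sketch of that flag-dispatch pattern (the <code>Namespace</code> fields and the in-memory registry are illustrative stand-ins, not the real services layer):

```python
from argparse import Namespace

def dispatch(args, plugins):
    # Mirror PluginHandler.dispatch: check each optional flag in turn
    # and perform the first matching operation on the registry.
    if args.add:
        name, path = args.add
        plugins[name] = {"path": path, "enabled": True}
        return f"Plugin {name} added successfully."
    elif args.delete:
        plugins.pop(args.delete[0], None)
        return f"Plugin {args.delete[0]} deleted successfully."
    elif args.list:
        return sorted(plugins)
    return None

registry = {}
print(dispatch(Namespace(add=["csv", "/tmp/csv.py"], delete=None, list=False), registry))
# Plugin csv added successfully.
print(dispatch(Namespace(add=None, delete=None, list=True), registry))
# ['csv']
```

The real handler additionally distinguishes local from remote mode and prints via the application's <code>printer</code> rather than returning strings.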
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.plugin_handler.PluginHandler.dispatch"><code class="name flex">
<span>def <span class="ident">dispatch</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch(self, args):
    try:
        # The target backend (PluginService or PluginStub) follows the app's 'mode';
        # app.services.plugins already points at the right one, and local plugin
        # operations are routed through a local PluginService explicitly below.

        is_remote = getattr(args, "remote", False)
        if is_remote and self.app.services.mode != "remote":
            printer.error("Cannot use --remote flag when not running in remote mode.")
            return

        if args.add:
            self.app.services.plugins.add_plugin(args.add[0], args.add[1])
            printer.success(f"Plugin {args.add[0]} added successfully{' remotely' if is_remote else ''}.")
        elif args.update:
            self.app.services.plugins.add_plugin(args.update[0], args.update[1], update=True)
            printer.success(f"Plugin {args.update[0]} updated successfully{' remotely' if is_remote else ''}.")
        elif args.delete:
            self.app.services.plugins.delete_plugin(args.delete[0])
            printer.success(f"Plugin {args.delete[0]} deleted successfully{' remotely' if is_remote else ''}.")
        elif args.enable:
            name = args.enable[0]
            if is_remote:
                self.app.plugins.preferences[name] = "remote"
            else:
                if name in self.app.plugins.preferences:
                    del self.app.plugins.preferences[name]

            self.app.plugins._save_preferences(self.app.services.config_svc.get_default_dir())

            # Always try to enable it locally (remove .bkp) if it exists,
            # regardless of mode, to keep files consistent with the "enabled" state
            try:
                # We use a local service instance to ensure we touch local files
                from ..services.plugin_service import PluginService
                local_svc = PluginService(self.app.services.config)
                local_svc.enable_plugin(name)
            except Exception:
                pass  # Ignore if not found locally or already enabled

            if is_remote and self.app.services.mode == "remote":
                self.app.services.plugins.enable_plugin(name)

            printer.success(f"Plugin {name} enabled successfully{' remotely' if is_remote else ' locally'}.")
        elif args.disable:
            name = args.disable[0]
            success = False
            if is_remote:
                if self.app.services.mode == "remote":
                    self.app.services.plugins.disable_plugin(name)
                    success = True
            else:
                # Disable locally
                from ..services.plugin_service import PluginService
                local_svc = PluginService(self.app.services.config)
                try:
                    if local_svc.disable_plugin(name):
                        success = True
                except Exception as e:
                    printer.warning(f"Could not disable local plugin: {e}")

            if success:
                printer.success(f"Plugin {name} disabled successfully{' remotely' if is_remote else ' locally'}.")

            # If any remote operation was performed, trigger a sync to update local cache immediately
            if is_remote and self.app.services.mode == "remote":
                try:
                    import os
                    cache_dir = os.path.join(self.app.services.config_svc.get_default_dir(), "remote_plugins")
                    # force_sync=True bypasses the subparser choice check;
                    # otherwise the hasher decides whether a sync is needed.
                    self.app.plugins._import_remote_plugins_to_argparse(
                        self.app.services.plugins,
                        self.app.subparsers,  # we need to make sure this is available
                        cache_dir,
                        force_sync=True
                    )
                except Exception:
                    pass

        elif getattr(args, "sync", False):
            # The actual sync logic is performed in connapp.py during init
            # if the --sync flag is detected in sys.argv
            printer.success("Remote plugins synchronized successfully.")
        elif args.list:
            # We need to fetch both local and remote if in remote mode
            local_plugins = {}
            remote_plugins = {}

            # Fetch depending on mode
            if self.app.services.mode == "remote":
                # For local we need to instantiate a local plugin service bypassing the stub
                from ..services.plugin_service import PluginService
                local_svc = PluginService(self.app.services.config)
                local_plugins = local_svc.list_plugins()
                remote_plugins = self.app.services.plugins.list_plugins()
            else:
                local_plugins = self.app.services.plugins.list_plugins()

            from rich.table import Table

            table = Table(title="Available Plugins", show_header=True, header_style="bold cyan")
            table.add_column("Plugin", style="cyan")
            table.add_column("State", style="bold")
            table.add_column("Origin", style="magenta")

            # Populate local plugins
            for name, details in local_plugins.items():
                state = "Disabled" if not details.get("enabled", True) else "Active"
                color = "red" if state == "Disabled" else "green"

                if self.app.services.mode == "remote" and state == "Active":
                    if self.app.plugins.preferences.get(name) == "remote":
                        state = "Shadowed (Override by Remote)"
                        color = "yellow"

                table.add_row(name, f"[{color}]{state}[/{color}]", "Local")

            # Populate remote plugins
            if self.app.services.mode == "remote":
                for name, details in remote_plugins.items():
                    state = "Disabled" if not details.get("enabled", True) else "Active"
                    color = "red" if state == "Disabled" else "green"

                    if state == "Active":
                        pref = self.app.plugins.preferences.get(name, "local")
                        # If preference isn't remote and the plugin exists locally, local takes priority
                        if pref != "remote" and name in local_plugins:
                            state = "Shadowed (Override by Local)"
                            color = "yellow"

                    table.add_row(name, f"[{color}]{state}[/{color}]", "Remote")

            if not local_plugins and not remote_plugins:
                printer.console.print(" No plugins found.")
            else:
                printer.console.print(table)

    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.plugin_handler.PluginHandler" href="#connpy.cli.plugin_handler.PluginHandler">PluginHandler</a></code></h4>
<ul class="">
<li><code><a title="connpy.cli.plugin_handler.PluginHandler.dispatch" href="#connpy.cli.plugin_handler.PluginHandler.dispatch">dispatch</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.6</a>.</p>
</footer>
</body>
</html>
@@ -1,320 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.6">
<title>connpy.cli.profile_handler API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.profile_handler</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.profile_handler.ProfileHandler"><code class="flex name class">
<span>class <span class="ident">ProfileHandler</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class ProfileHandler:
    def __init__(self, app):
        self.app = app
        self.forms = Forms(app)

    def dispatch(self, args):
        if not self.app.case:
            args.data[0] = args.data[0].lower()
        actions = {"add": self.add, "del": self.delete, "mod": self.modify, "show": self.show}
        return actions.get(args.action)(args)

    def delete(self, args):
        name = args.data[0]
        try:
            self.app.services.profiles.get_profile(name)
        except ProfileNotFoundError:
            printer.error(f"{name} not found")
            sys.exit(2)

        if name == "default":
            printer.error("Can't delete default profile")
            sys.exit(6)

        question = [inquirer.Confirm("delete", message=f"Are you sure you want to delete {name}?")]
        confirm = inquirer.prompt(question)
        if confirm is None or not confirm["delete"]:
            sys.exit(7)

        try:
            self.app.services.profiles.delete_profile(name)
            printer.success(f"{name} deleted successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(8)

    def show(self, args):
        try:
            profile = self.app.services.profiles.get_profile(args.data[0])
            yaml_output = yaml.dump(profile, sort_keys=False, default_flow_style=False)
            printer.data(args.data[0], yaml_output)
        except ProfileNotFoundError:
            printer.error(f"{args.data[0]} not found")
            sys.exit(2)

    def add(self, args):
        name = args.data[0]
        if name in self.app.services.profiles.list_profiles():
            printer.error(f"Profile '{name}' already exists.")
            sys.exit(4)

        new_profile_data = self.forms.questions_profiles(name)
        if not new_profile_data:
            sys.exit(7)

        try:
            self.app.services.profiles.add_profile(name, new_profile_data)
            printer.success(f"{name} added successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def modify(self, args):
        name = args.data[0]
        try:
            profile = self.app.services.profiles.get_profile(name, resolve=False)
        except ProfileNotFoundError:
            printer.error(f"Profile '{name}' not found")
            sys.exit(2)

        old_profile = {"id": name, **profile}
        edits = self.forms.questions_edit()
        if edits is None:
            sys.exit(7)

        update_profile_data = self.forms.questions_profiles(name, edit=edits)
        if not update_profile_data:
            sys.exit(7)

        if sorted(update_profile_data.items()) == sorted(old_profile.items()):
            printer.info("Nothing to do here")
            return

        try:
            self.app.services.profiles.update_profile(name, update_profile_data)
            printer.success(f"{name} edited successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
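<code>dispatch</code> lower-cases the profile name unless case sensitivity is enabled, then looks the subcommand up in an action table. A small runnable sketch of the same table-dispatch idea (the handler lambdas and <code>Namespace</code> fields are illustrative; the real methods call the profile service and exit via <code>sys.exit</code> on errors):

```python
from argparse import Namespace

def make_dispatch(actions, case_sensitive=False):
    # Build a dispatcher like ProfileHandler.dispatch: normalize the
    # name argument, then route args.action through the action table.
    def dispatch(args):
        if not case_sensitive:
            args.data[0] = args.data[0].lower()
        return actions[args.action](args)
    return dispatch

dispatch = make_dispatch({
    "add": lambda a: f"added {a.data[0]}",
    "del": lambda a: f"deleted {a.data[0]}",
})
print(dispatch(Namespace(action="add", data=["Office-FW"])))  # added office-fw
```

Keeping the action table as a dict makes adding a new subcommand a one-line change, which is the design the handler above follows.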
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.profile_handler.ProfileHandler.add"><code class="name flex">
<span>def <span class="ident">add</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def add(self, args):
    name = args.data[0]
    if name in self.app.services.profiles.list_profiles():
        printer.error(f"Profile '{name}' already exists.")
        sys.exit(4)

    new_profile_data = self.forms.questions_profiles(name)
    if not new_profile_data:
        sys.exit(7)

    try:
        self.app.services.profiles.add_profile(name, new_profile_data)
        printer.success(f"{name} added successfully")
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.profile_handler.ProfileHandler.delete"><code class="name flex">
<span>def <span class="ident">delete</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def delete(self, args):
    name = args.data[0]
    try:
        self.app.services.profiles.get_profile(name)
    except ProfileNotFoundError:
        printer.error(f"{name} not found")
        sys.exit(2)

    if name == "default":
        printer.error("Can't delete default profile")
        sys.exit(6)

    question = [inquirer.Confirm("delete", message=f"Are you sure you want to delete {name}?")]
    confirm = inquirer.prompt(question)
    if confirm is None or not confirm["delete"]:
        sys.exit(7)

    try:
        self.app.services.profiles.delete_profile(name)
        printer.success(f"{name} deleted successfully")
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(8)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.profile_handler.ProfileHandler.dispatch"><code class="name flex">
<span>def <span class="ident">dispatch</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch(self, args):
    if not self.app.case:
        args.data[0] = args.data[0].lower()
    actions = {"add": self.add, "del": self.delete, "mod": self.modify, "show": self.show}
    return actions.get(args.action)(args)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.profile_handler.ProfileHandler.modify"><code class="name flex">
<span>def <span class="ident">modify</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def modify(self, args):
    name = args.data[0]
    try:
        profile = self.app.services.profiles.get_profile(name, resolve=False)
    except ProfileNotFoundError:
        printer.error(f"Profile '{name}' not found")
        sys.exit(2)

    old_profile = {"id": name, **profile}
    edits = self.forms.questions_edit()
    if edits is None:
        sys.exit(7)

    update_profile_data = self.forms.questions_profiles(name, edit=edits)
    if not update_profile_data:
        sys.exit(7)

    if sorted(update_profile_data.items()) == sorted(old_profile.items()):
        printer.info("Nothing to do here")
        return

    try:
        self.app.services.profiles.update_profile(name, update_profile_data)
        printer.success(f"{name} edited successfully")
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.profile_handler.ProfileHandler.show"><code class="name flex">
<span>def <span class="ident">show</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def show(self, args):
    try:
        profile = self.app.services.profiles.get_profile(args.data[0])
        yaml_output = yaml.dump(profile, sort_keys=False, default_flow_style=False)
        printer.data(args.data[0], yaml_output)
    except ProfileNotFoundError:
        printer.error(f"{args.data[0]} not found")
        sys.exit(2)</code></pre>
|
||||
</details>
|
||||
<div class="desc"></div>
|
||||
</dd>
|
||||
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.profile_handler.ProfileHandler" href="#connpy.cli.profile_handler.ProfileHandler">ProfileHandler</a></code></h4>
<ul class="">
<li><code><a title="connpy.cli.profile_handler.ProfileHandler.add" href="#connpy.cli.profile_handler.ProfileHandler.add">add</a></code></li>
<li><code><a title="connpy.cli.profile_handler.ProfileHandler.delete" href="#connpy.cli.profile_handler.ProfileHandler.delete">delete</a></code></li>
<li><code><a title="connpy.cli.profile_handler.ProfileHandler.dispatch" href="#connpy.cli.profile_handler.ProfileHandler.dispatch">dispatch</a></code></li>
<li><code><a title="connpy.cli.profile_handler.ProfileHandler.modify" href="#connpy.cli.profile_handler.ProfileHandler.modify">modify</a></code></li>
<li><code><a title="connpy.cli.profile_handler.ProfileHandler.show" href="#connpy.cli.profile_handler.ProfileHandler.show">show</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.6</a>.</p>
</footer>
</body>
</html>
@@ -1,460 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.6">
<title>connpy.cli.run_handler API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target .name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
|
||||
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.run_handler</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.run_handler.RunHandler"><code class="flex name class">
<span>class <span class="ident">RunHandler</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
|
||||
<pre><code class="python">class RunHandler:
    def __init__(self, app):
        self.app = app
        self.print_lock = threading.Lock()

    def dispatch(self, args):
        if len(args.data) > 1:
            args.action = "noderun"
        actions = {"noderun": self.node_run, "generate": self.yaml_generate, "run": self.yaml_run}
        return actions.get(args.action)(args)

    def node_run(self, args):
        nodes_filter = args.data[0]
        commands = [" ".join(args.data[1:])]

        try:
            header_printed = False

            if hasattr(args, 'test_expected') and args.test_expected:
                # Mode: Test
                def _on_node_complete(unique, node_output, node_status, node_result):
                    nonlocal header_printed
                    with self.print_lock:
                        if not header_printed:
                            printer.console.print(Rule("OUTPUT", style="header"))
                            header_printed = True
                        printer.test_panel(unique, node_output, node_status, node_result)

                results = self.app.services.execution.test_commands(
                    nodes_filter=nodes_filter,
                    commands=commands,
                    expected=args.test_expected,
                    on_node_complete=_on_node_complete
                )
                printer.test_summary(results)
            else:
                # Mode: Normal Run
                def _on_node_complete(unique, node_output, node_status):
                    nonlocal header_printed
                    with self.print_lock:
                        if not header_printed:
                            printer.console.print(Rule("OUTPUT", style="header"))
                            header_printed = True
                        printer.node_panel(unique, node_output, node_status)

                results = self.app.services.execution.run_commands(
                    nodes_filter=nodes_filter,
                    commands=commands,
                    on_node_complete=_on_node_complete
                )
                printer.run_summary(results)

        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def yaml_generate(self, args):
        if os.path.exists(args.data[0]):
            printer.error(f"File '{args.data[0]}' already exists.")
            sys.exit(14)
        else:
            with open(args.data[0], "w") as file:
                file.write(get_instructions("generate"))
            printer.success(f"File {args.data[0]} generated successfully")
            sys.exit()

    def yaml_run(self, args):
        path = args.data[0]
        try:
            with open(path, "r") as f:
                playbook = yaml.load(f, Loader=yaml.FullLoader)

            for task in playbook.get("tasks", []):
                self.cli_run(task)

        except Exception as e:
            printer.error(f"Failed to run playbook {path}: {e}")
            sys.exit(10)

    def cli_run(self, script):
        name = script.get("name", "Task")
        try:
            action = script["action"]
            nodelist = script["nodes"]
            commands = script["commands"]
            variables = script.get("variables")
            output_cfg = script["output"]
            options = script.get("options", {})
        except KeyError as e:
            printer.error(f"[{name}] '{e.args[0]}' is mandatory in script")
            sys.exit(11)

        stdout = (output_cfg == "stdout")
        folder = output_cfg if output_cfg not in [None, "stdout"] else None
        prompt = options.get("prompt")

        try:
            header_printed = False
            if action == "run":
                # If stdout is true, we stream results as they arrive
                def _on_run_complete(unique, node_output, node_status):
                    nonlocal header_printed
                    if stdout:
                        with self.print_lock:
                            if not header_printed:
                                printer.console.print(Rule(name.upper(), style="header"))
                                header_printed = True
                            printer.node_panel(unique, node_output, node_status)

                results = self.app.services.execution.run_commands(
                    nodes_filter=nodelist,
                    commands=commands,
                    variables=variables,
                    parallel=options.get("parallel", 10),
                    timeout=options.get("timeout", 10),
                    folder=folder,
                    prompt=prompt,
                    on_node_complete=_on_run_complete
                )
                # Final Summary
                if not stdout and not folder:
                    with self.print_lock:
                        printer.console.print(Rule(name.upper(), style="header"))
                        for unique, data in results.items():
                            output = data["output"] if isinstance(data, dict) else data
                            printer.node_panel(unique, output, 0)

                # ALWAYS show the aggregate execution summary at the end
                printer.run_summary(results)

            elif action == "test":
                expected = script.get("expected", [])
                # Show test_panel per node ONLY if stdout is True
                def _on_test_complete(unique, node_output, node_status, node_result):
                    nonlocal header_printed
                    if stdout:
                        with self.print_lock:
                            if not header_printed:
                                printer.console.print(Rule(name.upper(), style="header"))
                                header_printed = True
                            printer.test_panel(unique, node_output, node_status, node_result)

                results = self.app.services.execution.test_commands(
                    nodes_filter=nodelist,
                    commands=commands,
                    expected=expected,
                    variables=variables,
                    parallel=options.get("parallel", 10),
                    timeout=options.get("timeout", 10),
                    folder=folder,
                    prompt=prompt,
                    on_node_complete=_on_test_complete
                )
                # ALWAYS show the aggregate summary at the end
                printer.test_summary(results)

        except ConnpyError as e:
            printer.error(str(e))</code></pre>
|
||||
</details>
<div class="desc"></div>
<h3>Methods</h3>
<dl>
|
||||
<dt id="connpy.cli.run_handler.RunHandler.cli_run"><code class="name flex">
<span>def <span class="ident">cli_run</span></span>(<span>self, script)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def cli_run(self, script):
    name = script.get("name", "Task")
    try:
        action = script["action"]
        nodelist = script["nodes"]
        commands = script["commands"]
        variables = script.get("variables")
        output_cfg = script["output"]
        options = script.get("options", {})
    except KeyError as e:
        printer.error(f"[{name}] '{e.args[0]}' is mandatory in script")
        sys.exit(11)

    stdout = (output_cfg == "stdout")
    folder = output_cfg if output_cfg not in [None, "stdout"] else None
    prompt = options.get("prompt")

    try:
        header_printed = False
        if action == "run":
            # If stdout is true, we stream results as they arrive
            def _on_run_complete(unique, node_output, node_status):
                nonlocal header_printed
                if stdout:
                    with self.print_lock:
                        if not header_printed:
                            printer.console.print(Rule(name.upper(), style="header"))
                            header_printed = True
                        printer.node_panel(unique, node_output, node_status)

            results = self.app.services.execution.run_commands(
                nodes_filter=nodelist,
                commands=commands,
                variables=variables,
                parallel=options.get("parallel", 10),
                timeout=options.get("timeout", 10),
                folder=folder,
                prompt=prompt,
                on_node_complete=_on_run_complete
            )
            # Final Summary
            if not stdout and not folder:
                with self.print_lock:
                    printer.console.print(Rule(name.upper(), style="header"))
                    for unique, data in results.items():
                        output = data["output"] if isinstance(data, dict) else data
                        printer.node_panel(unique, output, 0)

            # ALWAYS show the aggregate execution summary at the end
            printer.run_summary(results)

        elif action == "test":
            expected = script.get("expected", [])
            # Show test_panel per node ONLY if stdout is True
            def _on_test_complete(unique, node_output, node_status, node_result):
                nonlocal header_printed
                if stdout:
                    with self.print_lock:
                        if not header_printed:
                            printer.console.print(Rule(name.upper(), style="header"))
                            header_printed = True
                        printer.test_panel(unique, node_output, node_status, node_result)

            results = self.app.services.execution.test_commands(
                nodes_filter=nodelist,
                commands=commands,
                expected=expected,
                variables=variables,
                parallel=options.get("parallel", 10),
                timeout=options.get("timeout", 10),
                folder=folder,
                prompt=prompt,
                on_node_complete=_on_test_complete
            )
            # ALWAYS show the aggregate summary at the end
            printer.test_summary(results)

    except ConnpyError as e:
        printer.error(str(e))</code></pre>
</details>
<div class="desc"></div>
</dd>
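`cli_run` derives its two output modes from a single `output` field: the literal string `"stdout"` streams panels to the console, any other string is treated as a destination folder, and `None` suppresses both. That mapping can be sketched on its own:

```python
def split_output(output_cfg):
    # "stdout" streams to the console; any other string is a folder
    # path for saved output; None disables per-node output entirely.
    stdout = (output_cfg == "stdout")
    folder = output_cfg if output_cfg not in (None, "stdout") else None
    return stdout, folder

modes = [split_output(v) for v in ("stdout", "logs", None)]
```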
|
||||
<dt id="connpy.cli.run_handler.RunHandler.dispatch"><code class="name flex">
<span>def <span class="ident">dispatch</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch(self, args):
    if len(args.data) > 1:
        args.action = "noderun"
    actions = {"noderun": self.node_run, "generate": self.yaml_generate, "run": self.yaml_run}
    return actions.get(args.action)(args)</code></pre>
</details>
<div class="desc"></div>
</dd>
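`dispatch` uses the dictionary-dispatch idiom; note the original calls `actions.get(args.action)(args)` directly, which raises `TypeError` for an unrecognized action. A minimal standalone sketch with an explicit guard (handler names here are illustrative, not connpy's):

```python
def dispatch(action, data):
    # Route an action name to its handler via a dict lookup.
    actions = {
        "generate": lambda d: ("generate", d),
        "run": lambda d: ("run", d),
    }
    handler = actions.get(action)
    if handler is None:
        # actions.get(...)(...) without this guard would raise TypeError.
        raise ValueError(f"unknown action: {action}")
    return handler(data)

result = dispatch("run", "playbook.yaml")
```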
|
||||
<dt id="connpy.cli.run_handler.RunHandler.node_run"><code class="name flex">
<span>def <span class="ident">node_run</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def node_run(self, args):
    nodes_filter = args.data[0]
    commands = [" ".join(args.data[1:])]

    try:
        header_printed = False

        if hasattr(args, 'test_expected') and args.test_expected:
            # Mode: Test
            def _on_node_complete(unique, node_output, node_status, node_result):
                nonlocal header_printed
                with self.print_lock:
                    if not header_printed:
                        printer.console.print(Rule("OUTPUT", style="header"))
                        header_printed = True
                    printer.test_panel(unique, node_output, node_status, node_result)

            results = self.app.services.execution.test_commands(
                nodes_filter=nodes_filter,
                commands=commands,
                expected=args.test_expected,
                on_node_complete=_on_node_complete
            )
            printer.test_summary(results)
        else:
            # Mode: Normal Run
            def _on_node_complete(unique, node_output, node_status):
                nonlocal header_printed
                with self.print_lock:
                    if not header_printed:
                        printer.console.print(Rule("OUTPUT", style="header"))
                        header_printed = True
                    printer.node_panel(unique, node_output, node_status)

            results = self.app.services.execution.run_commands(
                nodes_filter=nodes_filter,
                commands=commands,
                on_node_complete=_on_node_complete
            )
            printer.run_summary(results)

    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
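`node_run` treats the first positional argument as the node filter and joins everything after it into a single command string. The argument handling in isolation (sample argv values are hypothetical):

```python
# args.data as produced by the CLI parser, e.g. `conn run office-* show ip int brief`
data = ["office-*", "show", "ip", "int", "brief"]

nodes_filter = data[0]              # first token selects the nodes
commands = [" ".join(data[1:])]     # the rest becomes one command string
```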
|
||||
<dt id="connpy.cli.run_handler.RunHandler.yaml_generate"><code class="name flex">
<span>def <span class="ident">yaml_generate</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def yaml_generate(self, args):
    if os.path.exists(args.data[0]):
        printer.error(f"File '{args.data[0]}' already exists.")
        sys.exit(14)
    else:
        with open(args.data[0], "w") as file:
            file.write(get_instructions("generate"))
        printer.success(f"File {args.data[0]} generated successfully")
        sys.exit()</code></pre>
</details>
<div class="desc"></div>
</dd>
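The exists-check in `yaml_generate` prevents clobbering a file that is already on disk. A self-contained sketch of that guard, with plain return values standing in for the handler's exit codes (14 on conflict):

```python
import os
import tempfile

def generate(path, template):
    # Refuse to overwrite an existing file, mirroring the sys.exit(14)
    # guard above; return values stand in for the exit codes.
    if os.path.exists(path):
        return 14
    with open(path, "w") as fh:
        fh.write(template)
    return 0

with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "playbook.yaml")
    first = generate(target, "tasks: []\n")
    second = generate(target, "tasks: []\n")  # file now exists
```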
|
||||
<dt id="connpy.cli.run_handler.RunHandler.yaml_run"><code class="name flex">
<span>def <span class="ident">yaml_run</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def yaml_run(self, args):
    path = args.data[0]
    try:
        with open(path, "r") as f:
            playbook = yaml.load(f, Loader=yaml.FullLoader)

        for task in playbook.get("tasks", []):
            self.cli_run(task)

    except Exception as e:
        printer.error(f"Failed to run playbook {path}: {e}")
        sys.exit(10)</code></pre>
</details>
<div class="desc"></div>
</dd>
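`yaml_run` loads a playbook and hands each entry under `tasks` to `cli_run`. The loop can be sketched without the surrounding handler; `json.loads` stands in for `yaml.load` here only so the example runs without PyYAML installed:

```python
import json

def run_playbook(text, runner):
    # Parse the playbook and dispatch each task, as yaml_run does.
    playbook = json.loads(text)
    for task in playbook.get("tasks", []):
        runner(task)

ran = []
run_playbook('{"tasks": [{"name": "t1"}, {"name": "t2"}]}', ran.append)
```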
|
||||
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.run_handler.RunHandler" href="#connpy.cli.run_handler.RunHandler">RunHandler</a></code></h4>
<ul class="">
<li><code><a title="connpy.cli.run_handler.RunHandler.cli_run" href="#connpy.cli.run_handler.RunHandler.cli_run">cli_run</a></code></li>
<li><code><a title="connpy.cli.run_handler.RunHandler.dispatch" href="#connpy.cli.run_handler.RunHandler.dispatch">dispatch</a></code></li>
<li><code><a title="connpy.cli.run_handler.RunHandler.node_run" href="#connpy.cli.run_handler.RunHandler.node_run">node_run</a></code></li>
<li><code><a title="connpy.cli.run_handler.RunHandler.yaml_generate" href="#connpy.cli.run_handler.RunHandler.yaml_generate">yaml_generate</a></code></li>
<li><code><a title="connpy.cli.run_handler.RunHandler.yaml_run" href="#connpy.cli.run_handler.RunHandler.yaml_run">yaml_run</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.6</a>.</p>
</footer>
</body>
</html>
@@ -1,433 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.6">
<title>connpy.cli.sync_handler API documentation</title>
<meta name="description" content="">
</head>
|
||||
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.sync_handler</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.sync_handler.SyncHandler"><code class="flex name class">
<span>class <span class="ident">SyncHandler</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class SyncHandler:
    def __init__(self, app):
        self.app = app

    def dispatch(self, args):
        action = getattr(args, "action", None)
        actions = {
            "login": self.login,
            "logout": self.logout,
            "status": self.status,
            "list": self.list_backups,
            "once": self.once,
            "restore": self.restore,
            "start": self.start,
            "stop": self.stop
        }
        handler = actions.get(action)
        if handler:
            return handler(args)

        return self.status(args)

    def login(self, args):
        self.app.services.sync.login()

    def logout(self, args):
        self.app.services.sync.logout()

    def status(self, args):
        status = self.app.services.sync.check_login_status()
        enabled = self.app.services.sync.sync_enabled
        remote = self.app.services.sync.sync_remote

        printer.info(f"Login Status: {status}")
        printer.info(f"Auto-Sync: {'Enabled' if enabled else 'Disabled'}")
        printer.info(f"Sync Remote Nodes: {'Yes' if remote else 'No'}")

    def list_backups(self, args):
        backups = self.app.services.sync.list_backups()
        if backups:
            yaml_output = yaml.dump(backups, sort_keys=False, default_flow_style=False)
            printer.custom("backups", "")
            print(yaml_output)
        else:
            printer.info("No backups found or not logged in.")

    def once(self, args):
        # Manual backup. We check if we should include remote nodes
        remote_data = None
        if self.app.services.sync.sync_remote and self.app.services.mode == "remote":
            inventory = self.app.services.nodes.get_inventory()
            # Merge with local settings
            local_settings = self.app.services.config_svc.get_settings()
            local_settings.pop("configfolder", None)

            # Maintain proper config structure: {config: {}, connections: {}, profiles: {}}
            remote_data = {
                "config": local_settings,
                "connections": inventory.get("connections", {}),
                "profiles": inventory.get("profiles", {})
            }

        if self.app.services.sync.compress_and_upload(remote_data):
            printer.success("Manual backup completed.")

    def restore(self, args):
        import inquirer
        file_id = getattr(args, "id", None)

        # Segmented flags
        restore_config = getattr(args, "restore_config", False)
        restore_nodes = getattr(args, "restore_nodes", False)

        # If neither is specified, we restore ALL (backwards compatibility)
        if not restore_config and not restore_nodes:
            restore_config = True
            restore_nodes = True

        # 1. Analyze what we are about to restore
        info = self.app.services.sync.analyze_backup_content(file_id)
        if not info:
            printer.error("Could not analyze backup content.")
            return

        # 2. Show detailed info
        printer.info("Restoration Details:")
        if restore_config:
            print(f" - Local Settings: Yes")
            print(f" - RSA Key (.osk): {'Yes' if info['has_key'] else 'No'}")
        if restore_nodes:
            target = "REMOTE" if self.app.services.mode == "remote" else "LOCAL"
            print(f" - Nodes: {info['nodes']}")
            print(f" - Folders: {info['folders']}")
            print(f" - Profiles: {info['profiles']}")
            print(f" - Destination: {target}")
        print("")

        questions = [inquirer.Confirm("confirm", message="Do you want to proceed with the restoration?", default=False)]
        answers = inquirer.prompt(questions)

        if not answers or not answers["confirm"]:
            printer.info("Restore cancelled.")
||||
return
|
||||
|
||||
# 3. Perform the actual restore
|
||||
if self.app.services.sync.restore_backup(
|
||||
file_id=file_id,
|
||||
restore_config=restore_config,
|
||||
restore_nodes=restore_nodes,
|
||||
app_instance=self.app
|
||||
):
|
||||
printer.success("Restore completed successfully.")
|
||||
|
||||
def start(self, args):
|
||||
self.app.services.config_svc.update_setting("sync", True)
|
||||
self.app.services.sync.sync_enabled = True
|
||||
printer.success("Auto-sync enabled.")
|
||||
|
||||
def stop(self, args):
|
||||
self.app.services.config_svc.update_setting("sync", False)
|
||||
self.app.services.sync.sync_enabled = False
|
||||
printer.success("Auto-sync disabled.")</code></pre>
|
||||
</details>
<div class="desc"></div>
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.sync_handler.SyncHandler.dispatch"><code class="name flex">
<span>def <span class="ident">dispatch</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch(self, args):
    action = getattr(args, "action", None)
    actions = {
        "login": self.login,
        "logout": self.logout,
        "status": self.status,
        "list": self.list_backups,
        "once": self.once,
        "restore": self.restore,
        "start": self.start,
        "stop": self.stop
    }
    handler = actions.get(action)
    if handler:
        return handler(args)

    return self.status(args)</code></pre>
</details>
<div class="desc"></div>
</dd>
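The `dispatch` method above routes subcommands through a plain dictionary lookup and falls back to `status` when the action is missing or unknown. A minimal standalone sketch of the same pattern (the handler set and return values here are illustrative, not Connpy's actual API):

```python
from argparse import Namespace

class MiniDispatcher:
    """Dict-based command router, mirroring the dispatch pattern above."""

    def dispatch(self, args):
        actions = {"start": self.start, "stop": self.stop}
        # Missing or unknown actions fall back to the default handler.
        handler = actions.get(getattr(args, "action", None))
        return handler(args) if handler else self.status(args)

    def start(self, args):
        return "started"

    def stop(self, args):
        return "stopped"

    def status(self, args):
        return "status shown"

d = MiniDispatcher()
print(d.dispatch(Namespace(action="start")))    # started
print(d.dispatch(Namespace(action="unknown")))  # status shown
```

Because the lookup table maps names to bound methods, adding a subcommand is one dictionary entry plus one method, with no if/elif chain to maintain.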
<dt id="connpy.cli.sync_handler.SyncHandler.list_backups"><code class="name flex">
<span>def <span class="ident">list_backups</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def list_backups(self, args):
    backups = self.app.services.sync.list_backups()
    if backups:
        yaml_output = yaml.dump(backups, sort_keys=False, default_flow_style=False)
        printer.custom("backups", "")
        print(yaml_output)
    else:
        printer.info("No backups found or not logged in.")</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.sync_handler.SyncHandler.login"><code class="name flex">
<span>def <span class="ident">login</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def login(self, args):
    self.app.services.sync.login()</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.sync_handler.SyncHandler.logout"><code class="name flex">
<span>def <span class="ident">logout</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def logout(self, args):
    self.app.services.sync.logout()</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.sync_handler.SyncHandler.once"><code class="name flex">
<span>def <span class="ident">once</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def once(self, args):
    # Manual backup. We check if we should include remote nodes
    remote_data = None
    if self.app.services.sync.sync_remote and self.app.services.mode == "remote":
        inventory = self.app.services.nodes.get_inventory()
        # Merge with local settings
        local_settings = self.app.services.config_svc.get_settings()
        local_settings.pop("configfolder", None)

        # Maintain proper config structure: {config: {}, connections: {}, profiles: {}}
        remote_data = {
            "config": local_settings,
            "connections": inventory.get("connections", {}),
            "profiles": inventory.get("profiles", {})
        }

    if self.app.services.sync.compress_and_upload(remote_data):
        printer.success("Manual backup completed.")</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.sync_handler.SyncHandler.restore"><code class="name flex">
<span>def <span class="ident">restore</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def restore(self, args):
    import inquirer
    file_id = getattr(args, "id", None)

    # Segmented flags
    restore_config = getattr(args, "restore_config", False)
    restore_nodes = getattr(args, "restore_nodes", False)

    # If neither is specified, we restore ALL (backwards compatibility)
    if not restore_config and not restore_nodes:
        restore_config = True
        restore_nodes = True

    # 1. Analyze what we are about to restore
    info = self.app.services.sync.analyze_backup_content(file_id)
    if not info:
        printer.error("Could not analyze backup content.")
        return

    # 2. Show detailed info
    printer.info("Restoration Details:")
    if restore_config:
        print(f" - Local Settings: Yes")
        print(f" - RSA Key (.osk): {'Yes' if info['has_key'] else 'No'}")
    if restore_nodes:
        target = "REMOTE" if self.app.services.mode == "remote" else "LOCAL"
        print(f" - Nodes: {info['nodes']}")
        print(f" - Folders: {info['folders']}")
        print(f" - Profiles: {info['profiles']}")
        print(f" - Destination: {target}")
    print("")

    questions = [inquirer.Confirm("confirm", message="Do you want to proceed with the restoration?", default=False)]
    answers = inquirer.prompt(questions)

    if not answers or not answers["confirm"]:
        printer.info("Restore cancelled.")
        return

    # 3. Perform the actual restore
    if self.app.services.sync.restore_backup(
        file_id=file_id,
        restore_config=restore_config,
        restore_nodes=restore_nodes,
        app_instance=self.app
    ):
        printer.success("Restore completed successfully.")</code></pre>
</details>
<div class="desc"></div>
</dd>
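`restore` treats the absence of both segment flags as "restore everything" for backwards compatibility. That defaulting rule can be isolated as a small pure helper (a hypothetical name, shown only to illustrate the flag logic above):

```python
def resolve_restore_flags(restore_config=False, restore_nodes=False):
    """If neither segment flag was passed, restore ALL (backwards compatible)."""
    if not restore_config and not restore_nodes:
        restore_config = restore_nodes = True
    return restore_config, restore_nodes

print(resolve_restore_flags())                    # (True, True)
print(resolve_restore_flags(restore_nodes=True))  # (False, True)
```

Keeping the rule in one place means passing `--restore-config` or `--restore-nodes` narrows the restore, while passing neither preserves the old restore-everything behavior.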
<dt id="connpy.cli.sync_handler.SyncHandler.start"><code class="name flex">
<span>def <span class="ident">start</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def start(self, args):
    self.app.services.config_svc.update_setting("sync", True)
    self.app.services.sync.sync_enabled = True
    printer.success("Auto-sync enabled.")</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.sync_handler.SyncHandler.status"><code class="name flex">
<span>def <span class="ident">status</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def status(self, args):
    status = self.app.services.sync.check_login_status()
    enabled = self.app.services.sync.sync_enabled
    remote = self.app.services.sync.sync_remote

    printer.info(f"Login Status: {status}")
    printer.info(f"Auto-Sync: {'Enabled' if enabled else 'Disabled'}")
    printer.info(f"Sync Remote Nodes: {'Yes' if remote else 'No'}")</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.sync_handler.SyncHandler.stop"><code class="name flex">
<span>def <span class="ident">stop</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def stop(self, args):
    self.app.services.config_svc.update_setting("sync", False)
    self.app.services.sync.sync_enabled = False
    printer.success("Auto-sync disabled.")</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.sync_handler.SyncHandler" href="#connpy.cli.sync_handler.SyncHandler">SyncHandler</a></code></h4>
<ul class="two-column">
<li><code><a title="connpy.cli.sync_handler.SyncHandler.dispatch" href="#connpy.cli.sync_handler.SyncHandler.dispatch">dispatch</a></code></li>
<li><code><a title="connpy.cli.sync_handler.SyncHandler.list_backups" href="#connpy.cli.sync_handler.SyncHandler.list_backups">list_backups</a></code></li>
<li><code><a title="connpy.cli.sync_handler.SyncHandler.login" href="#connpy.cli.sync_handler.SyncHandler.login">login</a></code></li>
<li><code><a title="connpy.cli.sync_handler.SyncHandler.logout" href="#connpy.cli.sync_handler.SyncHandler.logout">logout</a></code></li>
<li><code><a title="connpy.cli.sync_handler.SyncHandler.once" href="#connpy.cli.sync_handler.SyncHandler.once">once</a></code></li>
<li><code><a title="connpy.cli.sync_handler.SyncHandler.restore" href="#connpy.cli.sync_handler.SyncHandler.restore">restore</a></code></li>
<li><code><a title="connpy.cli.sync_handler.SyncHandler.start" href="#connpy.cli.sync_handler.SyncHandler.start">start</a></code></li>
<li><code><a title="connpy.cli.sync_handler.SyncHandler.status" href="#connpy.cli.sync_handler.SyncHandler.status">status</a></code></li>
<li><code><a title="connpy.cli.sync_handler.SyncHandler.stop" href="#connpy.cli.sync_handler.SyncHandler.stop">stop</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.6</a>.</p>
</footer>
</body>
</html>
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.6">
<title>connpy.cli.validators API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
    [...document.querySelectorAll('.hljs.language-python > .hljs-string')]
        .filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
        .forEach(el => {
            let d = document.createElement('details');
            d.classList.add('hljs-string');
            d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
            el.replaceWith(d);
        });
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.validators</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.validators.Validators"><code class="flex name class">
<span>class <span class="ident">Validators</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class Validators:
    def __init__(self, app):
        self.app = app

    def host_validation(self, answers, current, regex = "^.+$"):
        if not re.match(regex, current):
            raise inquirer.errors.ValidationError("", reason="Host cannot be empty")
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        return True

    def profile_protocol_validation(self, answers, current, regex = "(^ssh$|^telnet$|^kubectl$|^docker$|^ssm$|^$)"):
        if not re.match(regex, current):
            raise inquirer.errors.ValidationError("", reason="Pick between ssh, telnet, kubectl, docker, ssm or leave empty")
        return True

    def protocol_validation(self, answers, current, regex = "(^ssh$|^telnet$|^kubectl$|^docker$|^ssm$|^$|^@.+$)"):
        if not re.match(regex, current):
            raise inquirer.errors.ValidationError("", reason="Pick between ssh, telnet, kubectl, docker, ssm, leave empty or @profile")
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        return True

    def profile_port_validation(self, answers, current, regex = "(^[0-9]*$)"):
        if not re.match(regex, current):
            raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535 or leave empty")
        try:
            port = int(current)
        except ValueError:
            port = 0
        if current != "" and not 1 <= int(port) <= 65535:
            raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535 or leave empty")
        return True

    def port_validation(self, answers, current, regex = "(^[0-9]*$|^@.+$)"):
        if not re.match(regex, current):
            raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535, @profile or leave empty")
        try:
            port = int(current)
        except ValueError:
            port = 0
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        elif current != "" and not 1 <= int(port) <= 65535:
            raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535, @profile or leave empty")
        return True

    def pass_validation(self, answers, current, regex = "(^@.+$)"):
        profiles = current.split(",")
        for i in profiles:
            if not re.match(regex, i) or i[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(i))
        return True

    def tags_validation(self, answers, current):
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        elif current != "":
            isdict = False
            try:
                isdict = ast.literal_eval(current)
            except Exception:
                pass
            if not isinstance(isdict, dict):
                raise inquirer.errors.ValidationError("", reason="Tags should be a python dictionary.")
        return True

    def profile_tags_validation(self, answers, current):
        if current != "":
            isdict = False
            try:
                isdict = ast.literal_eval(current)
            except Exception:
                pass
            if not isinstance(isdict, dict):
                raise inquirer.errors.ValidationError("", reason="Tags should be a python dictionary.")
        return True

    def jumphost_validation(self, answers, current):
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        elif current != "":
            if current not in self.app.nodes_list:
                raise inquirer.errors.ValidationError("", reason="Node {} doesn't exist.".format(current))
        return True

    def profile_jumphost_validation(self, answers, current):
        if current != "":
            if current not in self.app.nodes_list:
                raise inquirer.errors.ValidationError("", reason="Node {} doesn't exist.".format(current))
        return True

    def default_validation(self, answers, current):
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        return True

    def bulk_node_validation(self, answers, current, regex = "^[0-9a-zA-Z_.,$#-]+$"):
        if not re.match(regex, current):
            raise inquirer.errors.ValidationError("", reason="Host cannot be empty")
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        return True

    def bulk_folder_validation(self, answers, current):
        if not self.app.case:
            current = current.lower()

        candidate = current
        if "/" in current:
            candidate = current.split("/")[0]

        matches = list(filter(lambda k: k == candidate, self.app.folders))
        if current != "" and len(matches) == 0:
            raise inquirer.errors.ValidationError("", reason="Location {} doesn't exist".format(current))
        return True

    def bulk_host_validation(self, answers, current, regex = "^.+$"):
        if not re.match(regex, current):
            raise inquirer.errors.ValidationError("", reason="Host cannot be empty")
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        hosts = current.split(",")
        nodes = answers["ids"].split(",")
        if len(hosts) > 1 and len(hosts) != len(nodes):
            raise inquirer.errors.ValidationError("", reason="Hosts list should be the same length as the nodes list")
        return True</code></pre>
</details>
<div class="desc"></div>
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.validators.Validators.bulk_folder_validation"><code class="name flex">
<span>def <span class="ident">bulk_folder_validation</span></span>(<span>self, answers, current)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def bulk_folder_validation(self, answers, current):
    if not self.app.case:
        current = current.lower()

    candidate = current
    if "/" in current:
        candidate = current.split("/")[0]

    matches = list(filter(lambda k: k == candidate, self.app.folders))
    if current != "" and len(matches) == 0:
        raise inquirer.errors.ValidationError("", reason="Location {} doesn't exist".format(current))
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.bulk_host_validation"><code class="name flex">
<span>def <span class="ident">bulk_host_validation</span></span>(<span>self, answers, current, regex='^.+$')</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def bulk_host_validation(self, answers, current, regex = "^.+$"):
    if not re.match(regex, current):
        raise inquirer.errors.ValidationError("", reason="Host cannot be empty")
    if current.startswith("@"):
        if current[1:] not in self.app.profiles:
            raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
    hosts = current.split(",")
    nodes = answers["ids"].split(",")
    if len(hosts) > 1 and len(hosts) != len(nodes):
        raise inquirer.errors.ValidationError("", reason="Hosts list should be the same length as the nodes list")
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.bulk_node_validation"><code class="name flex">
<span>def <span class="ident">bulk_node_validation</span></span>(<span>self, answers, current, regex='^[0-9a-zA-Z_.,$#-]+$')</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def bulk_node_validation(self, answers, current, regex = "^[0-9a-zA-Z_.,$#-]+$"):
    if not re.match(regex, current):
        raise inquirer.errors.ValidationError("", reason="Host cannot be empty")
    if current.startswith("@"):
        if current[1:] not in self.app.profiles:
            raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.default_validation"><code class="name flex">
<span>def <span class="ident">default_validation</span></span>(<span>self, answers, current)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def default_validation(self, answers, current):
    if current.startswith("@"):
        if current[1:] not in self.app.profiles:
            raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.host_validation"><code class="name flex">
<span>def <span class="ident">host_validation</span></span>(<span>self, answers, current, regex='^.+$')</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def host_validation(self, answers, current, regex = "^.+$"):
    if not re.match(regex, current):
        raise inquirer.errors.ValidationError("", reason="Host cannot be empty")
    if current.startswith("@"):
        if current[1:] not in self.app.profiles:
            raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.jumphost_validation"><code class="name flex">
<span>def <span class="ident">jumphost_validation</span></span>(<span>self, answers, current)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def jumphost_validation(self, answers, current):
    if current.startswith("@"):
        if current[1:] not in self.app.profiles:
            raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
    elif current != "":
        if current not in self.app.nodes_list:
            raise inquirer.errors.ValidationError("", reason="Node {} doesn't exist.".format(current))
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.pass_validation"><code class="name flex">
<span>def <span class="ident">pass_validation</span></span>(<span>self, answers, current, regex='(^@.+$)')</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def pass_validation(self, answers, current, regex = "(^@.+$)"):
    profiles = current.split(",")
    for i in profiles:
        if not re.match(regex, i) or i[1:] not in self.app.profiles:
            raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(i))
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.port_validation"><code class="name flex">
<span>def <span class="ident">port_validation</span></span>(<span>self, answers, current, regex='(^[0-9]*$|^@.+$)')</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def port_validation(self, answers, current, regex="(^[0-9]*$|^@.+$)"):
    if not re.match(regex, current):
        raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535, @profile or leave empty")
    try:
        port = int(current)
    except ValueError:
        port = 0
    if current.startswith("@"):
        if current[1:] not in self.app.profiles:
            raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
    elif current != "" and not 1 <= port <= 65535:
        raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535, @profile or leave empty")
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
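For context, the port check above gates input with a regex and then range-checks the numeric value. A minimal standalone sketch of the same logic, without the inquirer dependency (the `validate_port` helper name and its boolean return are ours, not part of connpy), might look like:

```python
import re

def validate_port(value, profiles=(), regex=r"(^[0-9]*$|^@.+$)"):
    """Return True if value is empty, a known @profile, or a port in 1-65535."""
    if not re.match(regex, value):
        return False  # neither digits nor an @profile reference
    if value.startswith("@"):
        return value[1:] in profiles  # the referenced profile must exist
    if value == "":
        return True  # empty means "use the default port"
    return 1 <= int(value) <= 65535

print(validate_port("22"))                      # True
print(validate_port("70000"))                   # False
print(validate_port("@lab", profiles=["lab"]))  # True
```

In the real validator a failed check raises `inquirer.errors.ValidationError` instead of returning `False`, so the prompt re-asks the question.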
<dt id="connpy.cli.validators.Validators.profile_jumphost_validation"><code class="name flex">
<span>def <span class="ident">profile_jumphost_validation</span></span>(<span>self, answers, current)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def profile_jumphost_validation(self, answers, current):
    if current != "":
        if current not in self.app.nodes_list:
            raise inquirer.errors.ValidationError("", reason="Node {} doesn't exist.".format(current))
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.profile_port_validation"><code class="name flex">
<span>def <span class="ident">profile_port_validation</span></span>(<span>self, answers, current, regex='(^[0-9]*$)')</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def profile_port_validation(self, answers, current, regex="(^[0-9]*$)"):
    if not re.match(regex, current):
        raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535 or leave empty")
    try:
        port = int(current)
    except ValueError:
        port = 0
    if current != "" and not 1 <= port <= 65535:
        raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535 or leave empty")
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.profile_protocol_validation"><code class="name flex">
<span>def <span class="ident">profile_protocol_validation</span></span>(<span>self, answers, current, regex='(^ssh$|^telnet$|^kubectl$|^docker$|^ssm$|^$)')</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def profile_protocol_validation(self, answers, current, regex="(^ssh$|^telnet$|^kubectl$|^docker$|^ssm$|^$)"):
    if not re.match(regex, current):
        raise inquirer.errors.ValidationError("", reason="Pick between ssh, telnet, kubectl, docker, ssm or leave empty")
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.profile_tags_validation"><code class="name flex">
<span>def <span class="ident">profile_tags_validation</span></span>(<span>self, answers, current)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def profile_tags_validation(self, answers, current):
    if current != "":
        isdict = False
        try:
            isdict = ast.literal_eval(current)
        except Exception:
            pass
        if not isinstance(isdict, dict):
            raise inquirer.errors.ValidationError("", reason="Tags should be a Python dictionary.")
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.protocol_validation"><code class="name flex">
<span>def <span class="ident">protocol_validation</span></span>(<span>self,<br>answers,<br>current,<br>regex='(^ssh$|^telnet$|^kubectl$|^docker$|^ssm$|^$|^@.+$)')</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def protocol_validation(self, answers, current, regex="(^ssh$|^telnet$|^kubectl$|^docker$|^ssm$|^$|^@.+$)"):
    if not re.match(regex, current):
        raise inquirer.errors.ValidationError("", reason="Pick between ssh, telnet, kubectl, docker, ssm, leave empty or @profile")
    if current.startswith("@"):
        if current[1:] not in self.app.profiles:
            raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.tags_validation"><code class="name flex">
<span>def <span class="ident">tags_validation</span></span>(<span>self, answers, current)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def tags_validation(self, answers, current):
    if current.startswith("@"):
        if current[1:] not in self.app.profiles:
            raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
    elif current != "":
        isdict = False
        try:
            isdict = ast.literal_eval(current)
        except Exception:
            pass
        if not isinstance(isdict, dict):
            raise inquirer.errors.ValidationError("", reason="Tags should be a Python dictionary.")
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
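The tags validators above accept a Python dict literal typed at the prompt and parse it with `ast.literal_eval`. The core check can be sketched standalone (the `is_dict_literal` helper name is ours, not part of connpy):

```python
import ast

def is_dict_literal(text):
    """True if text parses as a Python dict literal, e.g. "{'env': 'prod'}"."""
    try:
        # literal_eval only evaluates literals, so arbitrary code cannot run
        return isinstance(ast.literal_eval(text), dict)
    except (ValueError, SyntaxError):
        return False

print(is_dict_literal("{'env': 'prod'}"))  # True
print(is_dict_literal("[1, 2]"))           # False
print(is_dict_literal("not a dict"))       # False
```

`ast.literal_eval` is the safe counterpart to `eval` for this purpose: it rejects anything that is not a plain literal expression.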
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.validators.Validators" href="#connpy.cli.validators.Validators">Validators</a></code></h4>
<ul class="">
<li><code><a title="connpy.cli.validators.Validators.bulk_folder_validation" href="#connpy.cli.validators.Validators.bulk_folder_validation">bulk_folder_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.bulk_host_validation" href="#connpy.cli.validators.Validators.bulk_host_validation">bulk_host_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.bulk_node_validation" href="#connpy.cli.validators.Validators.bulk_node_validation">bulk_node_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.default_validation" href="#connpy.cli.validators.Validators.default_validation">default_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.host_validation" href="#connpy.cli.validators.Validators.host_validation">host_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.jumphost_validation" href="#connpy.cli.validators.Validators.jumphost_validation">jumphost_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.pass_validation" href="#connpy.cli.validators.Validators.pass_validation">pass_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.port_validation" href="#connpy.cli.validators.Validators.port_validation">port_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.profile_jumphost_validation" href="#connpy.cli.validators.Validators.profile_jumphost_validation">profile_jumphost_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.profile_port_validation" href="#connpy.cli.validators.Validators.profile_port_validation">profile_port_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.profile_protocol_validation" href="#connpy.cli.validators.Validators.profile_protocol_validation">profile_protocol_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.profile_tags_validation" href="#connpy.cli.validators.Validators.profile_tags_validation">profile_tags_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.protocol_validation" href="#connpy.cli.validators.Validators.protocol_validation">protocol_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.tags_validation" href="#connpy.cli.validators.Validators.tags_validation">tags_validation</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.6</a>.</p>
</footer>
</body>
</html>
@@ -1,799 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.5">
<title>connpy.grpc.connpy_pb2 API documentation</title>
<meta name="description" content="Generated protocol buffer code.">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
    hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
    hljs.highlightAll();
    /* Collapse source docstrings */
    setTimeout(() => {
        [...document.querySelectorAll('.hljs.language-python > .hljs-string')]
            .filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
            .forEach(el => {
                let d = document.createElement('details');
                d.classList.add('hljs-string');
                d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
                el.replaceWith(d);
            });
    }, 100);
})</script>
</head>
|
||||
<body>
|
||||
<main>
|
||||
<article id="content">
|
||||
<header>
|
||||
<h1 class="title">Module <code>connpy.grpc.connpy_pb2</code></h1>
|
||||
</header>
|
||||
<section id="section-intro">
|
||||
<p>Generated protocol buffer code.</p>
|
||||
</section>
|
||||
<section>
|
||||
</section>
|
||||
<section>
|
||||
</section>
|
||||
<section>
|
||||
</section>
|
||||
<section>
|
||||
<h2 class="section-title" id="header-classes">Classes</h2>
|
||||
<dl>
|
||||
<dt id="connpy.grpc.connpy_pb2.AIResponse"><code class="flex name class">
<span>class <span class="ident">AIResponse</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.AIResponse.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.AskRequest"><code class="flex name class">
<span>class <span class="ident">AskRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.AskRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.BoolResponse"><code class="flex name class">
<span>class <span class="ident">BoolResponse</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.BoolResponse.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.BulkRequest"><code class="flex name class">
<span>class <span class="ident">BulkRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.BulkRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.DeleteRequest"><code class="flex name class">
<span>class <span class="ident">DeleteRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.DeleteRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.ExportRequest"><code class="flex name class">
<span>class <span class="ident">ExportRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.ExportRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.FilterRequest"><code class="flex name class">
<span>class <span class="ident">FilterRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.FilterRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.FullReplaceRequest"><code class="flex name class">
<span>class <span class="ident">FullReplaceRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.FullReplaceRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.IdRequest"><code class="flex name class">
<span>class <span class="ident">IdRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.IdRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.IntRequest"><code class="flex name class">
<span>class <span class="ident">IntRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.IntRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.InteractRequest"><code class="flex name class">
<span>class <span class="ident">InteractRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.InteractRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.InteractResponse"><code class="flex name class">
<span>class <span class="ident">InteractResponse</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.InteractResponse.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.ListRequest"><code class="flex name class">
<span>class <span class="ident">ListRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.ListRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.MessageValue"><code class="flex name class">
<span>class <span class="ident">MessageValue</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.MessageValue.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.MoveRequest"><code class="flex name class">
<span>class <span class="ident">MoveRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.MoveRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.NodeRequest"><code class="flex name class">
<span>class <span class="ident">NodeRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.NodeRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.NodeRunResult"><code class="flex name class">
<span>class <span class="ident">NodeRunResult</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.NodeRunResult.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.PluginRequest"><code class="flex name class">
<span>class <span class="ident">PluginRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.PluginRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.ProfileRequest"><code class="flex name class">
<span>class <span class="ident">ProfileRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.ProfileRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.ProviderRequest"><code class="flex name class">
<span>class <span class="ident">ProviderRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.ProviderRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.RunRequest"><code class="flex name class">
<span>class <span class="ident">RunRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.RunRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.ScriptRequest"><code class="flex name class">
<span>class <span class="ident">ScriptRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.ScriptRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.StringRequest"><code class="flex name class">
<span>class <span class="ident">StringRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.StringRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.StringResponse"><code class="flex name class">
<span>class <span class="ident">StringResponse</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.StringResponse.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.StructRequest"><code class="flex name class">
<span>class <span class="ident">StructRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.StructRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.StructResponse"><code class="flex name class">
<span>class <span class="ident">StructResponse</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.StructResponse.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.TestRequest"><code class="flex name class">
<span>class <span class="ident">TestRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.TestRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.UpdateRequest"><code class="flex name class">
<span>class <span class="ident">UpdateRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.UpdateRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.connpy_pb2.ValueResponse"><code class="flex name class">
<span>class <span class="ident">ValueResponse</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.connpy_pb2.ValueResponse.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.grpc" href="index.html">connpy.grpc</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.AIResponse" href="#connpy.grpc.connpy_pb2.AIResponse">AIResponse</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.AIResponse.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.AIResponse.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.AskRequest" href="#connpy.grpc.connpy_pb2.AskRequest">AskRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.AskRequest.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.AskRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.BoolResponse" href="#connpy.grpc.connpy_pb2.BoolResponse">BoolResponse</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.BoolResponse.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.BoolResponse.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.BulkRequest" href="#connpy.grpc.connpy_pb2.BulkRequest">BulkRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.BulkRequest.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.BulkRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.DeleteRequest" href="#connpy.grpc.connpy_pb2.DeleteRequest">DeleteRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.DeleteRequest.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.DeleteRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.ExportRequest" href="#connpy.grpc.connpy_pb2.ExportRequest">ExportRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.ExportRequest.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.ExportRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.FilterRequest" href="#connpy.grpc.connpy_pb2.FilterRequest">FilterRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.FilterRequest.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.FilterRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.FullReplaceRequest" href="#connpy.grpc.connpy_pb2.FullReplaceRequest">FullReplaceRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.FullReplaceRequest.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.FullReplaceRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.IdRequest" href="#connpy.grpc.connpy_pb2.IdRequest">IdRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.IdRequest.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.IdRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.IntRequest" href="#connpy.grpc.connpy_pb2.IntRequest">IntRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.IntRequest.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.IntRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.InteractRequest" href="#connpy.grpc.connpy_pb2.InteractRequest">InteractRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.InteractRequest.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.InteractRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.InteractResponse" href="#connpy.grpc.connpy_pb2.InteractResponse">InteractResponse</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.InteractResponse.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.InteractResponse.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.ListRequest" href="#connpy.grpc.connpy_pb2.ListRequest">ListRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.ListRequest.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.ListRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.MessageValue" href="#connpy.grpc.connpy_pb2.MessageValue">MessageValue</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.MessageValue.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.MessageValue.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.MoveRequest" href="#connpy.grpc.connpy_pb2.MoveRequest">MoveRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.MoveRequest.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.MoveRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.NodeRequest" href="#connpy.grpc.connpy_pb2.NodeRequest">NodeRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.NodeRequest.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.NodeRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.NodeRunResult" href="#connpy.grpc.connpy_pb2.NodeRunResult">NodeRunResult</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.NodeRunResult.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.NodeRunResult.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.PluginRequest" href="#connpy.grpc.connpy_pb2.PluginRequest">PluginRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.PluginRequest.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.PluginRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.ProfileRequest" href="#connpy.grpc.connpy_pb2.ProfileRequest">ProfileRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.ProfileRequest.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.ProfileRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.ProviderRequest" href="#connpy.grpc.connpy_pb2.ProviderRequest">ProviderRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.ProviderRequest.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.ProviderRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.RunRequest" href="#connpy.grpc.connpy_pb2.RunRequest">RunRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.RunRequest.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.RunRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.ScriptRequest" href="#connpy.grpc.connpy_pb2.ScriptRequest">ScriptRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.ScriptRequest.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.ScriptRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.StringRequest" href="#connpy.grpc.connpy_pb2.StringRequest">StringRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.StringRequest.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.StringRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.StringResponse" href="#connpy.grpc.connpy_pb2.StringResponse">StringResponse</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.StringResponse.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.StringResponse.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.StructRequest" href="#connpy.grpc.connpy_pb2.StructRequest">StructRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.StructRequest.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.StructRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.StructResponse" href="#connpy.grpc.connpy_pb2.StructResponse">StructResponse</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.StructResponse.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.StructResponse.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.TestRequest" href="#connpy.grpc.connpy_pb2.TestRequest">TestRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.TestRequest.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.TestRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.UpdateRequest" href="#connpy.grpc.connpy_pb2.UpdateRequest">UpdateRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.UpdateRequest.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.UpdateRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.connpy_pb2.ValueResponse" href="#connpy.grpc.connpy_pb2.ValueResponse">ValueResponse</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.connpy_pb2.ValueResponse.DESCRIPTOR" href="#connpy.grpc.connpy_pb2.ValueResponse.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.5</a>.</p>
</footer>
</body>
</html>
File diff suppressed because it is too large
@@ -1,108 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.5">
<title>connpy.grpc API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.grpc</code></h1>
</header>
<section id="section-intro">
</section>
<section>
<h2 class="section-title" id="header-submodules">Sub-modules</h2>
<dl>
<dt><code class="name"><a title="connpy.grpc.connpy_pb2" href="connpy_pb2.html">connpy.grpc.connpy_pb2</a></code></dt>
<dd>
<div class="desc"><p>Generated protocol buffer code.</p></div>
</dd>
<dt><code class="name"><a title="connpy.grpc.connpy_pb2_grpc" href="connpy_pb2_grpc.html">connpy.grpc.connpy_pb2_grpc</a></code></dt>
<dd>
<div class="desc"><p>Client and server classes corresponding to protobuf-defined services.</p></div>
</dd>
<dt><code class="name"><a title="connpy.grpc.remote_plugin_pb2" href="remote_plugin_pb2.html">connpy.grpc.remote_plugin_pb2</a></code></dt>
<dd>
<div class="desc"><p>Generated protocol buffer code.</p></div>
</dd>
<dt><code class="name"><a title="connpy.grpc.remote_plugin_pb2_grpc" href="remote_plugin_pb2_grpc.html">connpy.grpc.remote_plugin_pb2_grpc</a></code></dt>
<dd>
<div class="desc"><p>Client and server classes corresponding to protobuf-defined services.</p></div>
</dd>
<dt><code class="name"><a title="connpy.grpc.server" href="server.html">connpy.grpc.server</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.grpc.stubs" href="stubs.html">connpy.grpc.stubs</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.grpc.utils" href="utils.html">connpy.grpc.utils</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</section>
<section>
</section>
<section>
</section>
<section>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy" href="../index.html">connpy</a></code></li>
</ul>
</li>
<li><h3><a href="#header-submodules">Sub-modules</a></h3>
<ul>
<li><code><a title="connpy.grpc.connpy_pb2" href="connpy_pb2.html">connpy.grpc.connpy_pb2</a></code></li>
<li><code><a title="connpy.grpc.connpy_pb2_grpc" href="connpy_pb2_grpc.html">connpy.grpc.connpy_pb2_grpc</a></code></li>
<li><code><a title="connpy.grpc.remote_plugin_pb2" href="remote_plugin_pb2.html">connpy.grpc.remote_plugin_pb2</a></code></li>
<li><code><a title="connpy.grpc.remote_plugin_pb2_grpc" href="remote_plugin_pb2_grpc.html">connpy.grpc.remote_plugin_pb2_grpc</a></code></li>
<li><code><a title="connpy.grpc.server" href="server.html">connpy.grpc.server</a></code></li>
<li><code><a title="connpy.grpc.stubs" href="stubs.html">connpy.grpc.stubs</a></code></li>
<li><code><a title="connpy.grpc.utils" href="utils.html">connpy.grpc.utils</a></code></li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.5</a>.</p>
</footer>
</body>
</html>
@@ -1,174 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.5">
<title>connpy.grpc.remote_plugin_pb2 API documentation</title>
<meta name="description" content="Generated protocol buffer code.">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.grpc.remote_plugin_pb2</code></h1>
</header>
<section id="section-intro">
<p>Generated protocol buffer code.</p>
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.grpc.remote_plugin_pb2.IdRequest"><code class="flex name class">
<span>class <span class="ident">IdRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.remote_plugin_pb2.IdRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.remote_plugin_pb2.OutputChunk"><code class="flex name class">
<span>class <span class="ident">OutputChunk</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.remote_plugin_pb2.OutputChunk.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.remote_plugin_pb2.PluginInvokeRequest"><code class="flex name class">
<span>class <span class="ident">PluginInvokeRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.remote_plugin_pb2.PluginInvokeRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.remote_plugin_pb2.StringResponse"><code class="flex name class">
<span>class <span class="ident">StringResponse</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc.remote_plugin_pb2.StringResponse.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.grpc" href="index.html">connpy.grpc</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.grpc.remote_plugin_pb2.IdRequest" href="#connpy.grpc.remote_plugin_pb2.IdRequest">IdRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.remote_plugin_pb2.IdRequest.DESCRIPTOR" href="#connpy.grpc.remote_plugin_pb2.IdRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.remote_plugin_pb2.OutputChunk" href="#connpy.grpc.remote_plugin_pb2.OutputChunk">OutputChunk</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.remote_plugin_pb2.OutputChunk.DESCRIPTOR" href="#connpy.grpc.remote_plugin_pb2.OutputChunk.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.remote_plugin_pb2.PluginInvokeRequest" href="#connpy.grpc.remote_plugin_pb2.PluginInvokeRequest">PluginInvokeRequest</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.remote_plugin_pb2.PluginInvokeRequest.DESCRIPTOR" href="#connpy.grpc.remote_plugin_pb2.PluginInvokeRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.remote_plugin_pb2.StringResponse" href="#connpy.grpc.remote_plugin_pb2.StringResponse">StringResponse</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.remote_plugin_pb2.StringResponse.DESCRIPTOR" href="#connpy.grpc.remote_plugin_pb2.StringResponse.DESCRIPTOR">DESCRIPTOR</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.5</a>.</p>
</footer>
</body>
</html>
@@ -1,372 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.5">
<title>connpy.grpc.remote_plugin_pb2_grpc API documentation</title>
<meta name="description" content="Client and server classes corresponding to protobuf-defined services.">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.grpc.remote_plugin_pb2_grpc</code></h1>
</header>
<section id="section-intro">
<p>Client and server classes corresponding to protobuf-defined services.</p>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-functions">Functions</h2>
<dl>
<dt id="connpy.grpc.remote_plugin_pb2_grpc.add_RemotePluginServiceServicer_to_server"><code class="name flex">
<span>def <span class="ident">add_RemotePluginServiceServicer_to_server</span></span>(<span>servicer, server)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def add_RemotePluginServiceServicer_to_server(servicer, server):
    rpc_method_handlers = {
            'get_plugin_source': grpc.unary_unary_rpc_method_handler(
                    servicer.get_plugin_source,
                    request_deserializer=remote__plugin__pb2.IdRequest.FromString,
                    response_serializer=remote__plugin__pb2.StringResponse.SerializeToString,
            ),
            'invoke_plugin': grpc.unary_stream_rpc_method_handler(
                    servicer.invoke_plugin,
                    request_deserializer=remote__plugin__pb2.PluginInvokeRequest.FromString,
                    response_serializer=remote__plugin__pb2.OutputChunk.SerializeToString,
            ),
    }
    generic_handler = grpc.method_handlers_generic_handler(
            'connpy_remote.RemotePluginService', rpc_method_handlers)
    server.add_generic_rpc_handlers((generic_handler,))
    server.add_registered_method_handlers('connpy_remote.RemotePluginService', rpc_method_handlers)</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.grpc.remote_plugin_pb2_grpc.RemotePluginService"><code class="flex name class">
<span>class <span class="ident">RemotePluginService</span></span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class RemotePluginService(object):
    """Missing associated documentation comment in .proto file."""

    @staticmethod
    def get_plugin_source(request,
            target,
            options=(),
            channel_credentials=None,
            call_credentials=None,
            insecure=False,
            compression=None,
            wait_for_ready=None,
            timeout=None,
            metadata=None):
        return grpc.experimental.unary_unary(
            request,
            target,
            '/connpy_remote.RemotePluginService/get_plugin_source',
            remote__plugin__pb2.IdRequest.SerializeToString,
            remote__plugin__pb2.StringResponse.FromString,
            options,
            channel_credentials,
            insecure,
            call_credentials,
            compression,
            wait_for_ready,
            timeout,
            metadata,
            _registered_method=True)

    @staticmethod
    def invoke_plugin(request,
            target,
            options=(),
            channel_credentials=None,
            call_credentials=None,
            insecure=False,
            compression=None,
            wait_for_ready=None,
            timeout=None,
            metadata=None):
        return grpc.experimental.unary_stream(
            request,
            target,
            '/connpy_remote.RemotePluginService/invoke_plugin',
            remote__plugin__pb2.PluginInvokeRequest.SerializeToString,
            remote__plugin__pb2.OutputChunk.FromString,
            options,
            channel_credentials,
            insecure,
            call_credentials,
            compression,
            wait_for_ready,
            timeout,
            metadata,
            _registered_method=True)</code></pre>
</details>
<div class="desc"><p>Missing associated documentation comment in .proto file.</p></div>
<h3>Static methods</h3>
<dl>
<dt id="connpy.grpc.remote_plugin_pb2_grpc.RemotePluginService.get_plugin_source"><code class="name flex">
<span>def <span class="ident">get_plugin_source</span></span>(<span>request,<br>target,<br>options=(),<br>channel_credentials=None,<br>call_credentials=None,<br>insecure=False,<br>compression=None,<br>wait_for_ready=None,<br>timeout=None,<br>metadata=None)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">@staticmethod
def get_plugin_source(request,
        target,
        options=(),
        channel_credentials=None,
        call_credentials=None,
        insecure=False,
        compression=None,
        wait_for_ready=None,
        timeout=None,
        metadata=None):
    return grpc.experimental.unary_unary(
        request,
        target,
        '/connpy_remote.RemotePluginService/get_plugin_source',
        remote__plugin__pb2.IdRequest.SerializeToString,
        remote__plugin__pb2.StringResponse.FromString,
        options,
        channel_credentials,
        insecure,
        call_credentials,
        compression,
        wait_for_ready,
        timeout,
        metadata,
        _registered_method=True)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.grpc.remote_plugin_pb2_grpc.RemotePluginService.invoke_plugin"><code class="name flex">
<span>def <span class="ident">invoke_plugin</span></span>(<span>request,<br>target,<br>options=(),<br>channel_credentials=None,<br>call_credentials=None,<br>insecure=False,<br>compression=None,<br>wait_for_ready=None,<br>timeout=None,<br>metadata=None)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">@staticmethod
def invoke_plugin(request,
        target,
        options=(),
        channel_credentials=None,
        call_credentials=None,
        insecure=False,
        compression=None,
        wait_for_ready=None,
        timeout=None,
        metadata=None):
    return grpc.experimental.unary_stream(
        request,
        target,
        '/connpy_remote.RemotePluginService/invoke_plugin',
        remote__plugin__pb2.PluginInvokeRequest.SerializeToString,
        remote__plugin__pb2.OutputChunk.FromString,
        options,
        channel_credentials,
        insecure,
        call_credentials,
        compression,
        wait_for_ready,
        timeout,
        metadata,
        _registered_method=True)</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.remote_plugin_pb2_grpc.RemotePluginServiceServicer"><code class="flex name class">
<span>class <span class="ident">RemotePluginServiceServicer</span></span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class RemotePluginServiceServicer(object):
    """Missing associated documentation comment in .proto file."""

    def get_plugin_source(self, request, context):
        """Missing associated documentation comment in .proto file."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def invoke_plugin(self, request, context):
        """Missing associated documentation comment in .proto file."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')</code></pre>
</details>
<div class="desc"><p>Missing associated documentation comment in .proto file.</p></div>
<h3>Subclasses</h3>
<ul class="hlist">
<li><a title="connpy.grpc.server.PluginServicer" href="server.html#connpy.grpc.server.PluginServicer">PluginServicer</a></li>
</ul>
<h3>Methods</h3>
<dl>
<dt id="connpy.grpc.remote_plugin_pb2_grpc.RemotePluginServiceServicer.get_plugin_source"><code class="name flex">
<span>def <span class="ident">get_plugin_source</span></span>(<span>self, request, context)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def get_plugin_source(self, request, context):
    """Missing associated documentation comment in .proto file."""
    context.set_code(grpc.StatusCode.UNIMPLEMENTED)
    context.set_details('Method not implemented!')
    raise NotImplementedError('Method not implemented!')</code></pre>
</details>
<div class="desc"><p>Missing associated documentation comment in .proto file.</p></div>
</dd>
<dt id="connpy.grpc.remote_plugin_pb2_grpc.RemotePluginServiceServicer.invoke_plugin"><code class="name flex">
<span>def <span class="ident">invoke_plugin</span></span>(<span>self, request, context)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def invoke_plugin(self, request, context):
    """Missing associated documentation comment in .proto file."""
    context.set_code(grpc.StatusCode.UNIMPLEMENTED)
    context.set_details('Method not implemented!')
    raise NotImplementedError('Method not implemented!')</code></pre>
</details>
<div class="desc"><p>Missing associated documentation comment in .proto file.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc.remote_plugin_pb2_grpc.RemotePluginServiceStub"><code class="flex name class">
<span>class <span class="ident">RemotePluginServiceStub</span></span>
<span>(</span><span>channel)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class RemotePluginServiceStub(object):
    """Missing associated documentation comment in .proto file."""

    def __init__(self, channel):
        """Constructor.

        Args:
            channel: A grpc.Channel.
        """
        self.get_plugin_source = channel.unary_unary(
                '/connpy_remote.RemotePluginService/get_plugin_source',
                request_serializer=remote__plugin__pb2.IdRequest.SerializeToString,
                response_deserializer=remote__plugin__pb2.StringResponse.FromString,
                _registered_method=True)
        self.invoke_plugin = channel.unary_stream(
                '/connpy_remote.RemotePluginService/invoke_plugin',
                request_serializer=remote__plugin__pb2.PluginInvokeRequest.SerializeToString,
                response_deserializer=remote__plugin__pb2.OutputChunk.FromString,
                _registered_method=True)</code></pre>
</details>
<div class="desc"><p>Missing associated documentation comment in .proto file.</p>
<p>Constructor.</p>
<h2 id="args">Args</h2>
<dl>
<dt><strong><code>channel</code></strong></dt>
<dd>A grpc.Channel.</dd>
</dl></div>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.grpc" href="index.html">connpy.grpc</a></code></li>
</ul>
</li>
<li><h3><a href="#header-functions">Functions</a></h3>
<ul class="">
<li><code><a title="connpy.grpc.remote_plugin_pb2_grpc.add_RemotePluginServiceServicer_to_server" href="#connpy.grpc.remote_plugin_pb2_grpc.add_RemotePluginServiceServicer_to_server">add_RemotePluginServiceServicer_to_server</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.grpc.remote_plugin_pb2_grpc.RemotePluginService" href="#connpy.grpc.remote_plugin_pb2_grpc.RemotePluginService">RemotePluginService</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.remote_plugin_pb2_grpc.RemotePluginService.get_plugin_source" href="#connpy.grpc.remote_plugin_pb2_grpc.RemotePluginService.get_plugin_source">get_plugin_source</a></code></li>
<li><code><a title="connpy.grpc.remote_plugin_pb2_grpc.RemotePluginService.invoke_plugin" href="#connpy.grpc.remote_plugin_pb2_grpc.RemotePluginService.invoke_plugin">invoke_plugin</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.remote_plugin_pb2_grpc.RemotePluginServiceServicer" href="#connpy.grpc.remote_plugin_pb2_grpc.RemotePluginServiceServicer">RemotePluginServiceServicer</a></code></h4>
<ul class="">
<li><code><a title="connpy.grpc.remote_plugin_pb2_grpc.RemotePluginServiceServicer.get_plugin_source" href="#connpy.grpc.remote_plugin_pb2_grpc.RemotePluginServiceServicer.get_plugin_source">get_plugin_source</a></code></li>
<li><code><a title="connpy.grpc.remote_plugin_pb2_grpc.RemotePluginServiceServicer.invoke_plugin" href="#connpy.grpc.remote_plugin_pb2_grpc.RemotePluginServiceServicer.invoke_plugin">invoke_plugin</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="connpy.grpc.remote_plugin_pb2_grpc.RemotePluginServiceStub" href="#connpy.grpc.remote_plugin_pb2_grpc.RemotePluginServiceStub">RemotePluginServiceStub</a></code></h4>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.5</a>.</p>
</footer>
</body>
</html>
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -1,144 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.5">
<title>connpy.grpc.utils API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.grpc.utils</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-functions">Functions</h2>
<dl>
<dt id="connpy.grpc.utils.from_struct"><code class="name flex">
<span>def <span class="ident">from_struct</span></span>(<span>struct)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def from_struct(struct):
    if not struct:
        return {}
    return json_format.MessageToDict(struct, preserving_proto_field_name=True)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.grpc.utils.from_value"><code class="name flex">
<span>def <span class="ident">from_value</span></span>(<span>val)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def from_value(val):
    if not val.HasField("kind"):
        return None
    return json.loads(json_format.MessageToJson(val))</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.grpc.utils.to_struct"><code class="name flex">
<span>def <span class="ident">to_struct</span></span>(<span>obj)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def to_struct(obj):
    if not obj:
        return Struct()
    s = Struct()
    json_format.ParseDict(obj, s)
    return s</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.grpc.utils.to_value"><code class="name flex">
<span>def <span class="ident">to_value</span></span>(<span>obj)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def to_value(obj):
    if obj is None:
        v = Value()
        v.null_value = 0
        return v
    json_str = json.dumps(obj)
    v = Value()
    json_format.Parse(json_str, v)
    return v</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</section>
<section>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.grpc" href="index.html">connpy.grpc</a></code></li>
</ul>
</li>
<li><h3><a href="#header-functions">Functions</a></h3>
<ul class="">
<li><code><a title="connpy.grpc.utils.from_struct" href="#connpy.grpc.utils.from_struct">from_struct</a></code></li>
<li><code><a title="connpy.grpc.utils.from_value" href="#connpy.grpc.utils.from_value">from_value</a></code></li>
<li><code><a title="connpy.grpc.utils.to_struct" href="#connpy.grpc.utils.to_struct">to_struct</a></code></li>
<li><code><a title="connpy.grpc.utils.to_value" href="#connpy.grpc.utils.to_value">to_value</a></code></li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.5</a>.</p>
</footer>
</body>
</html>
@@ -1,799 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.6">
<title>connpy.grpc_layer.connpy_pb2 API documentation</title>
<meta name="description" content="Generated protocol buffer code.">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.grpc_layer.connpy_pb2</code></h1>
</header>
<section id="section-intro">
<p>Generated protocol buffer code.</p>
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.AIResponse"><code class="flex name class">
<span>class <span class="ident">AIResponse</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.AIResponse.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.AskRequest"><code class="flex name class">
<span>class <span class="ident">AskRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.AskRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.BoolResponse"><code class="flex name class">
<span>class <span class="ident">BoolResponse</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.BoolResponse.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.BulkRequest"><code class="flex name class">
<span>class <span class="ident">BulkRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.BulkRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.DeleteRequest"><code class="flex name class">
<span>class <span class="ident">DeleteRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.DeleteRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.ExportRequest"><code class="flex name class">
<span>class <span class="ident">ExportRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.ExportRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.FilterRequest"><code class="flex name class">
<span>class <span class="ident">FilterRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.FilterRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.FullReplaceRequest"><code class="flex name class">
<span>class <span class="ident">FullReplaceRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.FullReplaceRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.IdRequest"><code class="flex name class">
<span>class <span class="ident">IdRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.IdRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.IntRequest"><code class="flex name class">
<span>class <span class="ident">IntRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.IntRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.InteractRequest"><code class="flex name class">
<span>class <span class="ident">InteractRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.InteractRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.InteractResponse"><code class="flex name class">
<span>class <span class="ident">InteractResponse</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.InteractResponse.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.ListRequest"><code class="flex name class">
<span>class <span class="ident">ListRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.ListRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.MessageValue"><code class="flex name class">
<span>class <span class="ident">MessageValue</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.MessageValue.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.MoveRequest"><code class="flex name class">
<span>class <span class="ident">MoveRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.MoveRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.NodeRequest"><code class="flex name class">
<span>class <span class="ident">NodeRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.NodeRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.NodeRunResult"><code class="flex name class">
<span>class <span class="ident">NodeRunResult</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.NodeRunResult.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.PluginRequest"><code class="flex name class">
<span>class <span class="ident">PluginRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.PluginRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.ProfileRequest"><code class="flex name class">
<span>class <span class="ident">ProfileRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.ProfileRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.ProviderRequest"><code class="flex name class">
<span>class <span class="ident">ProviderRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.ProviderRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.RunRequest"><code class="flex name class">
<span>class <span class="ident">RunRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.RunRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.ScriptRequest"><code class="flex name class">
<span>class <span class="ident">ScriptRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.ScriptRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.StringRequest"><code class="flex name class">
<span>class <span class="ident">StringRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.StringRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.StringResponse"><code class="flex name class">
<span>class <span class="ident">StringResponse</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.StringResponse.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.StructRequest"><code class="flex name class">
<span>class <span class="ident">StructRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.StructRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.StructResponse"><code class="flex name class">
<span>class <span class="ident">StructResponse</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.StructResponse.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.TestRequest"><code class="flex name class">
<span>class <span class="ident">TestRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.TestRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.UpdateRequest"><code class="flex name class">
<span>class <span class="ident">UpdateRequest</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.UpdateRequest.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
<dt id="connpy.grpc_layer.connpy_pb2.ValueResponse"><code class="flex name class">
<span>class <span class="ident">ValueResponse</span></span>
<span>(</span><span>*args, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>A ProtocolMessage</p></div>
<h3>Ancestors</h3>
<ul class="hlist">
<li>google._upb._message.Message</li>
<li>google.protobuf.message.Message</li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="connpy.grpc_layer.connpy_pb2.ValueResponse.DESCRIPTOR"><code class="name">var <span class="ident">DESCRIPTOR</span></code></dt>
<dd>
<div class="desc"><p>The type of the None singleton.</p></div>
</dd>
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.grpc_layer" href="index.html">connpy.grpc_layer</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li><h3><a href="#header-classes">Classes</a></h3>
|
||||
<ul>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.AIResponse" href="#connpy.grpc_layer.connpy_pb2.AIResponse">AIResponse</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.AIResponse.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.AIResponse.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.AskRequest" href="#connpy.grpc_layer.connpy_pb2.AskRequest">AskRequest</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.AskRequest.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.AskRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.BoolResponse" href="#connpy.grpc_layer.connpy_pb2.BoolResponse">BoolResponse</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.BoolResponse.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.BoolResponse.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.BulkRequest" href="#connpy.grpc_layer.connpy_pb2.BulkRequest">BulkRequest</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.BulkRequest.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.BulkRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.DeleteRequest" href="#connpy.grpc_layer.connpy_pb2.DeleteRequest">DeleteRequest</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.DeleteRequest.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.DeleteRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.ExportRequest" href="#connpy.grpc_layer.connpy_pb2.ExportRequest">ExportRequest</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.ExportRequest.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.ExportRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.FilterRequest" href="#connpy.grpc_layer.connpy_pb2.FilterRequest">FilterRequest</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.FilterRequest.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.FilterRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.FullReplaceRequest" href="#connpy.grpc_layer.connpy_pb2.FullReplaceRequest">FullReplaceRequest</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.FullReplaceRequest.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.FullReplaceRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.IdRequest" href="#connpy.grpc_layer.connpy_pb2.IdRequest">IdRequest</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.IdRequest.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.IdRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.IntRequest" href="#connpy.grpc_layer.connpy_pb2.IntRequest">IntRequest</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.IntRequest.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.IntRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.InteractRequest" href="#connpy.grpc_layer.connpy_pb2.InteractRequest">InteractRequest</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.InteractRequest.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.InteractRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.InteractResponse" href="#connpy.grpc_layer.connpy_pb2.InteractResponse">InteractResponse</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.InteractResponse.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.InteractResponse.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.ListRequest" href="#connpy.grpc_layer.connpy_pb2.ListRequest">ListRequest</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.ListRequest.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.ListRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.MessageValue" href="#connpy.grpc_layer.connpy_pb2.MessageValue">MessageValue</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.MessageValue.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.MessageValue.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.MoveRequest" href="#connpy.grpc_layer.connpy_pb2.MoveRequest">MoveRequest</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.MoveRequest.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.MoveRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.NodeRequest" href="#connpy.grpc_layer.connpy_pb2.NodeRequest">NodeRequest</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.NodeRequest.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.NodeRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.NodeRunResult" href="#connpy.grpc_layer.connpy_pb2.NodeRunResult">NodeRunResult</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.NodeRunResult.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.NodeRunResult.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.PluginRequest" href="#connpy.grpc_layer.connpy_pb2.PluginRequest">PluginRequest</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.PluginRequest.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.PluginRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.ProfileRequest" href="#connpy.grpc_layer.connpy_pb2.ProfileRequest">ProfileRequest</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.ProfileRequest.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.ProfileRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.ProviderRequest" href="#connpy.grpc_layer.connpy_pb2.ProviderRequest">ProviderRequest</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.ProviderRequest.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.ProviderRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.RunRequest" href="#connpy.grpc_layer.connpy_pb2.RunRequest">RunRequest</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.RunRequest.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.RunRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.ScriptRequest" href="#connpy.grpc_layer.connpy_pb2.ScriptRequest">ScriptRequest</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.ScriptRequest.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.ScriptRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.StringRequest" href="#connpy.grpc_layer.connpy_pb2.StringRequest">StringRequest</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.StringRequest.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.StringRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.StringResponse" href="#connpy.grpc_layer.connpy_pb2.StringResponse">StringResponse</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.StringResponse.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.StringResponse.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.StructRequest" href="#connpy.grpc_layer.connpy_pb2.StructRequest">StructRequest</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.StructRequest.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.StructRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.StructResponse" href="#connpy.grpc_layer.connpy_pb2.StructResponse">StructResponse</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.StructResponse.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.StructResponse.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.TestRequest" href="#connpy.grpc_layer.connpy_pb2.TestRequest">TestRequest</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.TestRequest.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.TestRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.UpdateRequest" href="#connpy.grpc_layer.connpy_pb2.UpdateRequest">UpdateRequest</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.UpdateRequest.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.UpdateRequest.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h4><code><a title="connpy.grpc_layer.connpy_pb2.ValueResponse" href="#connpy.grpc_layer.connpy_pb2.ValueResponse">ValueResponse</a></code></h4>
|
||||
<ul class="">
|
||||
<li><code><a title="connpy.grpc_layer.connpy_pb2.ValueResponse.DESCRIPTOR" href="#connpy.grpc_layer.connpy_pb2.ValueResponse.DESCRIPTOR">DESCRIPTOR</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
</ul>
|
||||
</li>
|
||||
</ul>
|
||||
</nav>
|
||||
</main>
|
||||
<footer id="footer">
|
||||
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.6</a>.</p>
|
||||
</footer>
|
||||
</body>
|
||||
</html>
|
||||