
Merge pull request #2 from enix/dev
New release
lcaflc authored Oct 6, 2023
2 parents e65bd77 + 4a484f3 commit 98ea199
Showing 4 changed files with 217 additions and 30 deletions.
113 changes: 105 additions & 8 deletions README.md
@@ -1,25 +1,122 @@
Proxmox VE Control
===

-`pvecontrol` software allow management of a cluster proxmox using it's remote API.
+![release workflow](https://github.com/enix/pvecontrol/actions/workflows/release.yml/badge.svg?branch=main)
+![pypi release](https://img.shields.io/pypi/v/pvecontrol.svg)
+![pypi downloads](https://img.shields.io/pypi/dm/pvecontrol.svg)
+
+`pvecontrol` (https://pypi.org/project/pvecontrol/) allows you to manage a Proxmox VE cluster using its API, with a convenient CLI for your shell.
+
+It is designed to ease working with multiple large clusters and to fetch detailed information that is not easily available in the UI and integrated tools.
+
+`pvecontrol` is based upon [proxmoxer](https://pypi.org/project/proxmoxer/), a wonderful framework to communicate with the APIs of Proxmox projects.

Installation
---

-Need python 3.7+
+The software needs Python version 3.7+.

-Install python requirements and directly use the script
+The easiest way to install it is simply using pip. New versions are automatically published to the [pypi](https://pypi.org/project/pvecontrol/) repository. It is recommended to use `pipx` in order to automatically create a dedicated Python virtualenv.

-```bash
-pip3 install -r requirements.txt
-python3 src/pvecontrol/pvecontrol.py -h
+```shell
+pipx install pvecontrol
```
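
To check the installation and later pull new releases, something like this should work (a sketch; `pipx upgrade` assumes the package was installed with `pipx` as above):

```shell
# Verify the executable landed on your PATH
pvecontrol --help
# Later on, upgrade to the latest release published on pypi
pipx upgrade pvecontrol
```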

Configuration
---

-Configuration is a yaml file in `$HOME/.config/pvecontrol/conig.yaml`. It contains configuration needed by proxmoxer to connect to a cluster. `pvecontrol` curently only use the http API and so work with an pve realm user or token (and also a `@pam` user but is not required).
+Configuration is a yaml file in `$HOME/.config/pvecontrol/config.yaml`. It contains the configuration proxmoxer needs to connect to a cluster. `pvecontrol` currently only uses the HTTP API and so works with a pve realm user or token (a `@pam` user works too but is not recommended at all).

+It is highly recommended to set up a dedicated user for this tool. You should not use the `root@pam` Proxmox credentials in any way on production systems, because the configuration file is stored in plain text, unencrypted, on your system.

+If you plan to use `pvecontrol` in read-only mode to fetch cluster information, you can restrict the user to the `PVEAuditor` role on path `/`. These are the minimum permissions `pvecontrol` needs to work.
+For other operations on VMs it is recommended to grant `PVEVMAdmin` on path `/`. This allows start, stop, migrate, and so on. A sketch of the matching `pveum` commands follows.
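
As an illustration, the dedicated user and its ACLs could be created like this on a cluster node (a sketch, assuming a recent Proxmox VE; the user name and password are placeholders, not values mandated by `pvecontrol`):

```shell
# Create a dedicated user in the pve realm
pveum user add pvecontrol@pve --password 'use-a-strong-secret'
# Read-only usage: PVEAuditor on path / is the minimum pvecontrol needs
pveum acl modify / --users pvecontrol@pve --roles PVEAuditor
# For VM operations (start, stop, migrate, ...) grant PVEVMAdmin as well
pveum acl modify / --users pvecontrol@pve --roles PVEVMAdmin
```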

+Once you have set up your management user for `pvecontrol`, you can generate your configuration file. A default configuration template with all options is available [here](https://github.com/enix/pvecontrol/blob/dev/src/pvecontrol/config_default.yaml). Use this file to build your own configuration file, or start from the example below:
+```yaml
+---
+
+clusters:
+  - name: my-test-cluster
+    host: 192.168.1.10
+    user: pvecontrol@pve
+    password: superpasssecret
+  - name: prod-cluster-1
+    host: 10.10.10.10
+    user: pvecontrol@pve
+    password: Supers3cUre
+```
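
To put this file in place (the path comes from the section above; since it holds plain-text passwords, keeping it private is a sensible precaution):

```shell
mkdir -p ~/.config/pvecontrol
$EDITOR ~/.config/pvecontrol/config.yaml
# The file contains plain-text passwords, so keep it readable only by you
chmod 600 ~/.config/pvecontrol/config.yaml
```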
+Usage
+---
+
+`pvecontrol` provides a complete *help* message for quick usage:

+```shell
+$ pvecontrol --help
+usage: pvecontrol [-h] [-v] [--debug] -c CLUSTER {clusterstatus,nodelist,vmlist,tasklist,taskget} ...
+
+Proxmox VE control cli.
+
+positional arguments:
+  {clusterstatus,nodelist,vmlist,tasklist,taskget}
+                        sub-command help
+    clusterstatus       Show cluster status
+    nodelist            List nodes in the cluster
+    vmlist              List VMs in the cluster
+    tasklist            List tasks
+    taskget             Get task detail
+
+options:
+  -h, --help            show this help message and exit
+  -v, --verbose
+  --debug
+  -c CLUSTER, --cluster CLUSTER
+                        Proxmox cluster name as defined in configuration
+```

+`pvecontrol` works with subcommands for each operation. Each subcommand has its own *help*:

+```shell
+$ pvecontrol taskget --help
+usage: pvecontrol taskget [-h] --upid UPID [-f]
+
+options:
+  -h, --help    show this help message and exit
+  --upid UPID   Proxmox tasks UPID to get informations
+  -f, --follow  Follow task log output
+```
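
For example, to inspect and follow a running task (the cluster name and UPID below are made-up placeholders, shown only to illustrate the expected format):

```shell
pvecontrol --cluster my-test-cluster taskget \
  --upid 'UPID:pve-node1:0003C4D9:1A2B3C4D:651F9A00:qmigrate:100:pvecontrol@pve:' --follow
```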

+For each operation you must tell `pvecontrol` which cluster from your configuration file to operate on, so the `--cluster` argument is mandatory.

+The simplest operation on a cluster, and a good check that the user is correctly configured, is `clusterstatus`:

+```shell
+$ pvecontrol --cluster my-test-cluster clusterstatus
+INFO:root:Proxmox cluster: my-test-cluster
+INFO:root:{'nodes': 9, 'id': 'cluster', 'type': 'cluster', 'quorate': 1, 'version': 17, 'name': 'pve-r1az2'}
++-----------+-------+---------+--------------+--------------+
+|    name   | nodes | quorate | allocatedmem | allocatedcpu |
++-----------+-------+---------+--------------+--------------+
+| pve-r1az2 |   9   |    1    |  583.5 GiB   |     167      |
++-----------+-------+---------+--------------+--------------+
+```

+If this works, you're ready to go.

+Development
+---
+
+Install the python requirements and use the script directly. All the configuration is shared with the standard installation.
+
+```shell
+pip3 install -r requirements.txt
+python3 src/pvecontrol/pvecontrol.py -h
+```

+This project uses *semantic versioning* with the [python-semantic-release](https://python-semantic-release.readthedocs.io/en/latest/) toolkit in order to automate the release process. All commits must therefore follow the [Angular Commit Message Conventions](https://github.com/angular/angular/blob/master/CONTRIBUTING.md#-commit-message-format). The repository's `main` branch is also protected to prevent any unwanted publication of a new release. All updates must go through a PR with a review; see the sketch below for commit message examples.
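
Hypothetical commit messages following the Angular `type(scope): summary` convention (the scopes are illustrative, not prescribed by this repository); with python-semantic-release defaults, `fix` typically triggers a patch release and `feat` a minor one:

```shell
git commit -m "fix(config): report a clear error when config.yaml is missing"
git commit -m "feat(vmmigrate): add a --dry-run flag"
git commit -m "docs(readme): document the taskget subcommand"
```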

+---
+
-Default configuration template is available in `src/config_default.yaml`. Use this file to build your own configuration file.
Made with :heart: by Enix (http://enix.io) :monkey: from Paris :fr:.
2 changes: 1 addition & 1 deletion requirements.txt
@@ -4,7 +4,7 @@
pyyaml
urllib3
# proxmoxer depends
-proxmoxer>=2.0.0
+proxmoxer>=2.0.1
requests
requests_toolbelt
openssh_wrapper
6 changes: 3 additions & 3 deletions src/pvecontrol/config_default.yaml
@@ -1,7 +1,7 @@
---

clusters:
-  - name: cluster1
+  - name: cluster1
    host: 127.0.0.1
-    user: root
-    password: default
+    user: pvecontrol@pve
+    password: somerandomsecret
126 changes: 108 additions & 18 deletions src/pvecontrol/pvecontrol.py
@@ -52,6 +52,34 @@ def _get_nodes(proxmox):
        nodes.append(node)
    return nodes

+def _get_node(proxmox, node):
+    for snode in _get_nodes(proxmox):
+        logging.debug("NODE: %s", snode)
+        if snode['node'] == node:
+            return snode
+    return None
+
+def _get_vms(proxmox):
+    vms = []
+    for node in _get_nodes(proxmox):
+        for vm in proxmox.nodes(node['node']).qemu.get():
+            vm = _filter_keys(vm, ['vmid', 'status', 'cpus', 'maxdisk', 'maxmem', 'name'])
+            vm['vmid'] = int(vm['vmid'])
+            vm['node'] = node['node']
+            vm['maxmem'] = naturalsize(vm['maxmem'], binary=True)
+            vm['maxdisk'] = naturalsize(vm['maxdisk'], binary=True)
+            vms.append(vm)
+    logging.debug("VMs: %s" % vms)
+    return vms
+
+def _get_vm(proxmox, vmid):
+    vms = _get_vms(proxmox)
+    for vm in vms:
+        logging.debug("_get_vm: %s" % vm)
+        if vm['vmid'] == vmid:
+            return vm
+    return None

def _get_node_allocated_mem(proxmox, node, running = True):
    # Get the allocated memory for a node
    allocated_mem = 0
@@ -62,6 +90,15 @@ def _get_node_allocated_mem(proxmox, node, running = True):
            allocated_mem += config['memory']
    return allocated_mem

+def _get_node_ressources(proxmox, node):
+    allocated_mem = _get_node_allocated_mem(proxmox, node['node'])
+    # We convert from MB to Bytes
+    node['allocatedmem'] = _convert_to_MB(allocated_mem)
+    node['allocatedcpu'] = _get_node_allocated_cpu(proxmox, node['node'])
+    node['maxmem'] = naturalsize(node['maxmem'], binary=True)
+    node['maxdisk'] = naturalsize(node['maxdisk'], binary=True)
+    return node

def _get_cluster_allocated_mem(proxmox, running = True):
    # Get the allocated memory for a cluster
    allocated_mem = 0
@@ -110,28 +147,66 @@ def action_nodelist(proxmox, args):
    # # List proxmox nodes in the cluster using proxmoxer api
    nodes = []
    for node in _get_nodes(proxmox):
        if "maxcpu" not in node: node['maxcpu'] = 0
        if "maxmem" not in node: node['maxmem'] = 0
        if "maxdisk" not in node: node['maxdisk'] = 0
        node = _filter_keys(node, ['node', 'maxcpu', 'status', 'maxmem', 'maxdisk'])
-        allocated_mem = _get_node_allocated_mem(proxmox, node['node'])
-        # We convert from MB to Bytes
-        node['allocatedmem'] = _convert_to_MB(allocated_mem)
-        node['allocatedcpu'] = _get_node_allocated_cpu(proxmox, node['node'])
-        node['maxmem'] = naturalsize(node['maxmem'], binary=True)
-        node['maxdisk'] = naturalsize(node['maxdisk'], binary=True)
+        node = _get_node_ressources(proxmox, node)
        nodes.append(node)
    _print_tableoutput(nodes, sortby='node')

def action_vmlist(proxmox, args):
    # # List proxmox VMs in the cluster using proxmoxer api
-    vms = []
-    for node in _get_nodes(proxmox):
-        for vm in proxmox.nodes(node['node']).qemu.get():
-            vm = _filter_keys(vm, ['vmid', 'status', 'cpus', 'maxdisk', 'maxmem', 'name'])
-            vm['node'] = node['node']
-            vm['maxmem'] = naturalsize(vm['maxmem'], binary=True)
-            vm['maxdisk'] = naturalsize(vm['maxdisk'], binary=True)
-            vms.append(vm)
+    vms = _get_vms(proxmox)
    _print_tableoutput(vms, sortby='vmid')

+def action_vmmigrate(proxmox, args):
+    logging.debug("ARGS: %s" % args)
+    # Migrate a vm to a node
+    vmid = int(args.vmid)
+    target = str(args.target)
+
+    # Check that vmid exists
+    vm = _get_vm(proxmox, vmid)
+    logging.debug("Source vm: %s" % vm)
+    if not vm:
+        print("Source vm not found")
+        sys.exit(1)
+    # Get source node
+    node = _get_node(proxmox, vm['node'])
+    if not node:
+        print("Source node does not exist")
+        sys.exit(1)
+    logging.debug("Source node: %s" % node)
+
+    # Check that the target node exists
+    target = _get_node(proxmox, target)
+    if not target:
+        print("Target node does not exist")
+        sys.exit(1)
+    target = _get_node_ressources(proxmox, target)
+    logging.debug("Target node: %s" % target)
+    # Check that the target node has enough resources
+    # FIXME
+
+    # Check that the migration is possible
+    check = proxmox.nodes(node['node']).qemu(vmid).migrate.get(node=node['node'], target=target['node'])
+    logging.debug("Migration check: %s" % check)
+    options = {}
+    options['node'] = node['node']
+    options['target'] = target['node']
+    options['online'] = int(args.online)
+    if len(check['local_disks']) > 0:
+        options['with-local-disks'] = int(True)
+
+    if not args.dry_run:
+        # Start the migration task
+        upid = proxmox.nodes(node['node']).qemu(vmid).migrate.post(**options)
+        # Follow the created task
+        _print_task(proxmox, upid, args.follow)
+    else:
+        print("Dry run, skipping migration")

def action_tasklist(proxmox, args):
    tasks = []
    for task in proxmox.get("cluster/tasks"):
@@ -152,14 +227,17 @@ def _print_taskstatus(status):
    _print_tableoutput(output)

def action_taskget(proxmox, args):
-    task = Tasks.decode_upid(args.upid)
+    _print_task(proxmox, args.upid, args.follow)
+
+def _print_task(proxmox, upid, follow = False):
+    task = Tasks.decode_upid(upid)
    logging.debug("Task: %s", task)
    status = proxmox.nodes(task['node']).tasks(task['upid']).status.get()
    logging.debug("Task status: %s", status)
    _print_taskstatus(status)
    log = proxmox.nodes(task['node']).tasks(task['upid']).log.get(limit=0)
    logging.debug("Task Log: %s", log)
-    if status['status'] == 'running' and args.follow:
+    if status['status'] == 'running' and follow:
        lastline = 0
        print("log output, follow mode")
        while status['status'] == "running":
@@ -177,12 +255,15 @@ def action_taskget(proxmox, args):
        _print_tableoutput([{"log output": Tasks.decode_log(log)}])

def _parser():
+    ## FIXME
+    ## Add version in help output
+
    # Parser configuration
-    parser = argparse.ArgumentParser(description='Proxmox VE control cli.')
+    parser = argparse.ArgumentParser(description='Proxmox VE control cli.', epilog="Made with love by Enix.io")
    parser.add_argument('-v', '--verbose', action='store_true')
    parser.add_argument('--debug', action='store_true')
    parser.add_argument('-c', '--cluster', action='store', required=True, help='Proxmox cluster name as defined in configuration')
-    subparsers = parser.add_subparsers(help='sub-command help', dest='action_name')
+    subparsers = parser.add_subparsers(required=True)

    # clusterstatus parser
    parser_clusterstatus = subparsers.add_parser('clusterstatus', help='Show cluster status')
@@ -194,6 +275,15 @@ def _parser():
    # vmlist parser
    parser_vmlist = subparsers.add_parser('vmlist', help='List VMs in the cluster')
    parser_vmlist.set_defaults(func=action_vmlist)
+    # vmmigrate parser
+    parser_vmmigrate = subparsers.add_parser('vmmigrate', help='Migrate VMs in the cluster')
+    parser_vmmigrate.add_argument('--vmid', action='store', required=True, type=int, help="VM to migrate")
+    parser_vmmigrate.add_argument('--target', action='store', required=True, help="Destination Proxmox VE node")
+    parser_vmmigrate.add_argument('--online', action='store_true', help="Online migrate the VM, default True", default=True)
+    parser_vmmigrate.add_argument('-f', '--follow', action='store_true', help="Follow task log output")
+    parser_vmmigrate.add_argument('--dry-run', action='store_true', help="Dry run, do not execute migration")
+    parser_vmmigrate.set_defaults(func=action_vmmigrate)
+
    # tasklist parser
    parser_tasklist = subparsers.add_parser('tasklist', help='List tasks')
    parser_tasklist.set_defaults(func=action_tasklist)
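
A hypothetical invocation of the new `vmmigrate` subcommand added above (the cluster name, VM id, and node name are placeholders):

```shell
# Dry run: resolves the source VM and target node but skips the migration
pvecontrol --cluster my-test-cluster vmmigrate --vmid 100 --target pve-node2 --dry-run
# Actual migration, following the created task's log until completion
pvecontrol --cluster my-test-cluster vmmigrate --vmid 100 --target pve-node2 --follow
```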
