The script shields the caller from the complexities involved in starting virt-v2v on an oVirt/RHV host. It daemonizes to the background and monitors the progress of the conversion, providing status information in a state file. This allows for an asynchronous conversion workflow.
The expected usage is as follows:
- wrapper start: the client runs the wrapper; at the moment there are no command-line arguments and everything is configured by JSON data presented on stdin.
- initialization: the wrapper reads the JSON data from stdin and parses and validates the content; depending on the situation it may also change the effective user to a non-root account.
- daemonization: the wrapper writes to stdout a simple JSON object containing the paths to the wrapper log file (`wrapper_log`), the virt-v2v log file (`v2v_log`), the state file (`state_file`) that can be used to monitor the progress, and the throttling file (`throttling_file`) that can be used to apply resource limits; after that it forks to the background.
- conversion: finally, the virt-v2v process is executed; the wrapper monitors its output and updates the state file on a regular basis.
- finalization: when virt-v2v terminates, the wrapper updates the state file one last time and exits.
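For illustration, the JSON object printed on stdout might look like the following sketch; the keys are the ones listed above, but the paths are examples only and depend on the deployment:

```
{
    "wrapper_log": "/tmp/v2v-import-20180101T000000.log",
    "v2v_log": "/tmp/v2v-import-20180101T000000-virt-v2v.log",
    "state_file": "/tmp/v2v-import-20180101T000000.state",
    "throttling_file": "/tmp/v2v-import-20180101T000000.throttle"
}
```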
This section describes the keys understood by the wrapper in the input JSON. Keys are mandatory unless explicitly stated otherwise.
General information:
- `vm_name`: name of the VM to import. In case of the `ssh` transport method this is a URI containing the host and the path to the VM files, e.g. `ssh://root@1.2.3.4/vmfs/volumes/datastore/tg-mini/tg-mini.vmx`.
- `output_format`: one of `raw` or `qcow2`; default is `raw` if not specified.
- `transport_method`: type of transport to use; supported methods are `ssh` and `vddk`.
- `daemonize`: boolean declaring whether to daemonize virt-v2v or not (default is `true`). When virt-v2v is not daemonized it is not started in a transient systemd unit, which means that throttling options are not available. On the other hand, this allows one to run the conversion in an environment without systemd (e.g. a container). For the KubeVirt container virt-v2v is never daemonized, no matter what this option is set to.
For `vddk` the following keys need to be specified:
- `vmware_uri`: libvirt URI of the source hypervisor
- `vmware_password`: password used when connecting to the source hypervisor
- `vmware_fingerprint`: fingerprint of the SSL certificate on the source hypervisor (also called thumbprint)
For the `ssh` method no other information is necessary. Optionally, the following can be specified:

- `ssh_key`: optional, private part of the SSH key to use. If this is not provided, keys in the `~/.ssh` directory are used.
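As an illustration, a minimal configuration for the `ssh` method could look like the following sketch; the address and path reuse the `vm_name` example above, and an output configuration (see below) still has to be added:

```
{
    "vm_name": "ssh://root@1.2.3.4/vmfs/volumes/datastore/tg-mini/tg-mini.vmx",
    "transport_method": "ssh",
    "output_format": "qcow2"
}
```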
Output configuration: refer to the section Output configuration below.
Miscellaneous:
- `source_disks`: optional key containing a list of disks in the VM; if specified, it is used to initialize progress information in the state file.
- `network_mappings`: optional key containing a list of network mappings; if specified, it is used to connect the VM's NICs to the destination networks during the conversion, using the virt-v2v `--bridge` option.
- `virtio_win`: optional key containing the path to a virtio-win ISO image; this is useful for installing Windows drivers in the VM during conversion. It may be either an absolute path or only a filename, in which case the path to the ISO domain is auto-detected.
- `install_drivers`: optional key specifying whether to install Windows drivers during conversion; default is `false`. If `install_drivers` is `true` and `virtio_win` is not specified, the wrapper attempts to automatically select the best ISO from the ISO domain. Note that when no ISO is found this does not lead to a failed conversion; no drivers are installed in that case. This is different from the situation when `virtio_win` is specified but points to a non-existing ISO, which is an error.
- `allocation`: optional key specifying the allocation type to use; possible values are `preallocated` and `sparse`.
- `throttling`: optional key with initial throttling options (see below).
- `luks_keys_vault`: optional key specifying a JSON file containing the LUKS keys for encrypted devices (see below).
Example:
```
{
    "export_domain": "storage1.example.com:/data/export-domain",
    "vm_name": "My_Machine",
    "transport_method": "vddk",
    "vmware_fingerprint": "1A:3F:26:C6:DC:2C:44:88:AA:33:81:3C:18:6E:5D:9F:C0:EE:DF:5C",
    "vmware_uri": "esx://root@10.2.0.20?no_verify=1",
    "vmware_password": "secret-password",
    "source_disks": [
        "[dataStore_1] My_Machine/My_Machine_1.vmdk",
        "[dataStore_1] My_Machine/My_Machine_2.vmdk"
    ],
    "network_mappings": [
        {
            "source": "networkA1",
            "destination": "networkA2"
        },
        {
            "source": "networkX1",
            "destination": "networkX2"
        }
    ],
    "virtio_win": "virtio-win-0.1.141.iso",
    "luks_keys_vault": "/path_to/luks_key_vault.json"
}
```
There is no configuration key specifying the type of output. Rather, the output method is chosen depending on the keys present. If keys defining multiple output modes are present, the first one is selected based on the following order of precedence:
- oVirt API upload
- export domain
To select the oVirt API upload method, add `rhv_url` to the configuration. Together with `rhv_url` some other keys also need to be specified:
- `rhv_url`: URL of the oVirt API endpoint.
- `rhv_password`: password used to authenticate to the API.
- `rhv_cluster`: name of the target cluster.
- `rhv_storage`: name of the target storage domain.
- `rhv_cafile`: optional on a VDSM host; path to the CA certificate. If the key is not specified, the wrapper looks for the certificate at the default VDSM location.
- `insecure_connection`: optional; when `true`, peer certificates are not verified. For now this is used only when connecting to oVirt/RHV. Default is `false`.
Example:
```
{
    ...
    "rhv_url": "https://ovirt.example.com/ovirt-engine/api",
    "rhv_password": "secret-password",
    "rhv_cluster": "Default",
    "rhv_storage": "data",
    "rhv_cafile": "/etc/pki/vdsm/certs/cacert.pem",
    "insecure_connection": true
}
```
To request conversion into an export domain, add the following key to the configuration:

- `export_domain`: path to the NFS share in the format `<hostname>:<path>`
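For example, reusing the value from the complete example above, the output part of the configuration would contain just this key:

```
{
    ...
    "export_domain": "storage1.example.com:/data/export-domain"
}
```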
The state file is a JSON file. Its content changes as the conversion goes through various stages, and so do the keys present in the file.
Once virt-v2v is executed, the state file is created with the following keys:
- `started`: with value `true`
- `pid`: the process ID of virt-v2v. This can be used to kill the process and terminate the conversion. In that case, once virt-v2v terminates (with a non-zero return code) the wrapper immediately terminates too.
- `disks`: array of per-disk progress. The value is either an empty list or a list of objects initialized from the `source_disks` passed to the wrapper. If no `source_disks` is specified, the `disks` list is constructed incrementally during the conversion process.
- `disk_count`: the number of disks that will be copied. Initially zero, or the number of disks in `source_disks`. When virt-v2v starts copying disks, the value is updated to match the count of disks virt-v2v will actually copy. Note that the value does not have to match the length of the `disks` array! If `source_disks` is not specified or contains invalid values, the length of `disks` can be smaller or larger than `disk_count`.
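For illustration, right after virt-v2v starts and before any `source_disks` information is applied, the state file might look like this (the PID is an example):

```
{
    "started": true,
    "pid": 12345,
    "disks": [],
    "disk_count": 0
}
```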
When virt-v2v gets past the initialization phase and starts copying disks, the wrapper updates the progress for each disk in the `disks` list. Each item in the list contains the following keys:
- `path`: the path description of the disk as the backend sees it
- `progress`: the percentage of the disk copied, as a number in the range from 0 to 100
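For example, in the middle of a conversion the `disks` array could look like the following sketch, with the paths taken from the `source_disks` example above:

```
"disks": [
    {
        "path": "[dataStore_1] My_Machine/My_Machine_1.vmdk",
        "progress": 100
    },
    {
        "path": "[dataStore_1] My_Machine/My_Machine_2.vmdk",
        "progress": 35.5
    }
]
```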
When virt-v2v finishes, the state is updated with the following keys:
- `return_code`: return code of the virt-v2v process. As usual, 0 means the process terminated successfully and any non-zero value means an error. Note, however, that this value should not be used to check whether the conversion succeeded or failed (see below).
- `vm_id`: unique ID of the newly created VM. Available in OSP and RHV.
Right before the wrapper terminates, it updates the state with:
- `finished`: with value `true`
- `failed`: with value `true` if the conversion process failed. If everything went OK, this key is not present. The existence of this key is the main way to check whether the conversion succeeded or not.
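Putting it all together, a state file after a successful conversion might look like the following sketch; the PID and VM ID are placeholders, and on failure a `"failed": true` key would be present as well:

```
{
    "started": true,
    "pid": 12345,
    "disk_count": 2,
    "disks": [
        {
            "path": "[dataStore_1] My_Machine/My_Machine_1.vmdk",
            "progress": 100
        },
        {
            "path": "[dataStore_1] My_Machine/My_Machine_2.vmdk",
            "progress": 100
        }
    ],
    "return_code": 0,
    "vm_id": "12345678-1234-1234-1234-123456789abc",
    "finished": true
}
```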
It is possible to throttle the resources used by the conversion. At the moment one can limit CPU and network bandwidth. Before the wrapper detaches to the background, it reports the path to the throttling file on stdout (see the usage overview above); limits can be changed at runtime by writing JSON data to this file.
Example of throttling file content:
```
{
    "cpu": "50%",
    "network": "1048576"
}
```
This will assign half of a single CPU to the process and limit network bandwidth to 1 MB/s.
Limits read from the throttling file are added to those already in place. That means one does not strictly need to include all the possible limits in the file, although it is suggested to do so; otherwise some information can be lost if multiple changes occur to the throttling file before the wrapper manages to read them.
To remove a limit, assign it the value `unlimited`.
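For example, writing the following content to the throttling file removes the CPU limit while keeping any network limit in place:

```
{
    "cpu": "unlimited"
}
```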
CPU usage is expressed in percents of a single CPU's time.
From systemd.resource-control(5):
Assign the specified CPU time quota to the processes executed. Takes a
percentage value, suffixed with "%". The percentage specifies how much CPU
time the unit shall get at maximum, relative to the total CPU time
available on one CPU. Use values > 100% for allotting CPU time on more than
one CPU. This controls the "cpu.max" attribute on the unified control group
hierarchy and "cpu.cfs_quota_us" on legacy.
For example, to assign half of one CPU use "50%" or to assign two CPUs use "200%". To remove any limit on CPU one can pass the string "unlimited".
Due to the design of the cgroup filter in tc, the network is limited on output only. (Input is limited only indirectly, and unreliably.)
The limit is specified in bytes per second, without any units, e.g. "12345". Note that despite being a number, the value should be passed as a string in JSON. To remove any limit one can pass the string "unlimited".
virt-v2v-wrapper always looks for a default path: `${HOME}/.v2v_luks_keys_vault.json`. The file MUST NOT be readable by group or other.
In the file, the device names are not the actual names of the devices inside the virtual machine. They mainly represent the order: `/dev/sda` means the first disk, `/dev/sda1` means the first encrypted partition on the first disk.
Example LUKS keys file:
```
{
    "my_vm_1": [
        {
            "device": "/dev/sda1",
            "key": "secret11"
        },
        {
            "device": "/dev/sda2",
            "key": "secret12"
        }
    ],
    "my_vm_2": [
        {
            "device": "/dev/sda1",
            "key": "secret11"
        }
    ]
}
```