
bhyve: update VM applies different rules to memory than create VM #69

Open
jzinkweg opened this issue Jun 23, 2020 · 0 comments

Resizing a bhyve VM to a package with more storage but identical memory/swap triggers a re-calculation of max_physical_memory which can also cause an override of the configured swap size.

The "create" endpoint allows us to create a bhyve VM with swap < ram + 1024 + 256:

```
[root@headnode (Office1) ~]# sdc sdc-vmapi /vms/1f20290b-3220-4f3b-825a-eaf096c83ea8?fields=brand,ram,max_swap,quota,max_physical_memory
HTTP/1.1 200 OK
Connection: close
Content-Type: application/json
Content-Length: 81
Date: Tue, 23 Jun 2020 13:58:59 GMT
Server: VMAPI/9.9.2
x-request-id: 3fdb1c7f-7333-414a-bca7-18ce50bab99e
x-response-time: 13
x-server-name: 0cd87743-e021-461a-9809-c1a2e69fe180

{
  "brand": "bhyve",
  "ram": 1024,
  "max_swap": 2048,
  "quota": 1,
  "max_physical_memory": 1280
}
```

We can then update this VM to a package with more storage:

```
[root@headnode (Office1) ~]# sdc sdc-vmapi /vms/1f20290b-3220-4f3b-825a-eaf096c83ea8?action=update -d '{"billing_id":"61c8ddcc-5f08-c1e2-bab6-e41e46a208d4"}'
```

Job parameters as logged:

```
{
  "payload": {
    "billing_id": "61c8ddcc-5f08-c1e2-bab6-e41e46a208d4",
    "brand": "bhyve",
    "cpu_cap": 200,
    "max_lwps": 1000,
    "ram": 1024,
    "max_swap": 2048,
    "flexible_disk_size": 30720,
    "vcpus": 2,
    "zfs_io_priority": 20,
    "cpu_shares": 8
  },
  "subtask": "resize",
  "task": "update",
  "vm_uuid": "1f20290b-3220-4f3b-825a-eaf096c83ea8",
  "vm_brand": "bhyve",
  "owner_uuid": "b8989522-71fe-6ab5-9a29-a3b06acf0ca5",
  "server_uuid": "36373131-3635-435a-3233-323930333831",
  "last_modified": "2020-06-23T13:58:58.000Z",
  "x-request-id": "c9050ce1-7643-4254-92c4-20cfdabe78c2",
  "current_ram": 1024,
  "current_quota": 1
}
```

However, this also changes max_swap and max_physical_memory:

```
[root@headnode (Office1) ~]# sdc sdc-vmapi /vms/1f20290b-3220-4f3b-825a-eaf096c83ea8?fields=brand,ram,max_swap,max_physical_memory,flexible_disk_size
HTTP/1.1 200 OK
Connection: close
Content-Type: application/json
Content-Length: 98
Date: Tue, 23 Jun 2020 14:02:46 GMT
Server: VMAPI/9.9.2
x-request-id: 72ddedb7-d83f-402e-ab39-e3e5876f942d
x-response-time: 64
x-server-name: 0cd87743-e021-461a-9809-c1a2e69fe180

{
  "brand": "bhyve",
  "ram": 1024,
  "max_swap": 2304,
  "max_physical_memory": 2304,
  "flexible_disk_size": 30720
}
```

I've narrowed this down to the max_physical_memory parameter being converted into 'ram':
https://github.com/joyent/sdc-vmapi/blob/6cdddbc034bd4b25ed7eef29a7762e3b01485d2c/lib/common/validation.js#L1512
...after which vmadm recalculates it based on ram + overhead:
https://github.com/joyent/smartos-live/blob/2e4a94e6d9dd3be80fb70027485acd1192d9bb22/src/vm/node_modules/VM.js#L4536
...and then the swap is increased to match the new value:
https://github.com/joyent/smartos-live/blob/2e4a94e6d9dd3be80fb70027485acd1192d9bb22/src/vm/node_modules/VM.js#L4701
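The chain above can be sketched as follows. This is a simplified illustration, not the actual VM.js/validation.js code; the function name `recalcMemory` and the constant `BHYVE_OVERHEAD_MB` are hypothetical, with the 1024 + 256 MiB overhead inferred from the observed before/after values in this report.

```javascript
// Assumed overhead, inferred from the observed jump 1024 -> 2304 MiB.
var BHYVE_OVERHEAD_MB = 1024 + 256;

// Hypothetical sketch of the resize path described above.
function recalcMemory(payload) {
  // Step 1 (validation.js): max_physical_memory is folded into 'ram',
  // so only 'ram' survives into the resize payload.
  var ram = payload.ram;

  // Step 2 (VM.js): max_physical_memory is recomputed as ram + overhead.
  var maxPhys = ram + BHYVE_OVERHEAD_MB;

  // Step 3 (VM.js): swap is raised to at least max_physical_memory,
  // overriding the configured package value.
  var maxSwap = Math.max(payload.max_swap, maxPhys);

  return { ram: ram, max_physical_memory: maxPhys, max_swap: maxSwap };
}
```

With the payload from the job above, `recalcMemory({ ram: 1024, max_swap: 2048 })` yields `max_physical_memory: 2304` and `max_swap: 2304`, matching the second VMAPI response.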

We can work around the change in swap by updating the packages to set swap to at least ram + 1024 + 256, but I'm not sure what impact, if any, the change to max_physical_memory has.
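The workaround condition can be expressed as a small check. The helper `swapSafeForBhyve` is hypothetical, and the 1024 + 256 MiB overhead is an assumption based on the values observed above.

```javascript
// Hypothetical check: is a package's max_swap large enough that the
// bhyve resize path cannot silently raise it?
// Overhead of 1024 + 256 MiB is assumed from the observed values.
function swapSafeForBhyve(pkg) {
  return pkg.max_swap >= pkg.ram + 1024 + 256;
}
```

The package in this report fails the check (`swapSafeForBhyve({ ram: 1024, max_swap: 2048 })` is false), while a package with max_swap of 2304 or more would pass.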

NB: the documentation recommends setting swap to at least 2x ram; in this case that is insufficient.
