m2 mac-mini self-hosted fails to see parallels plugin #75
Comments
Thanks for reporting this @Stromweld! Can you share your workflow config, as well as a minimal template that you use with this Action?
Here is the workflow config, edited for length. The full version can be found at https://github.com/chef/bento:

```yaml
on:
  pull_request:

jobs:
  aarch64:
    runs-on: [self-hosted, ARM64, parallels]
    strategy:
      fail-fast: false
      matrix:
        os:
          - almalinux-8
        provider:
          - parallels-iso
    steps:
      - name: Checkout
        uses: actions/checkout@main

      - name: Setup Packer
        if: steps.verify-changed-files.outputs.files_changed == 'true'
        uses: hashicorp/setup-packer@main
        with:
          version: latest

      - name: Packer Init
        env:
          PACKER_GITHUB_API_TOKEN: "${{ secrets.PACKER_GITHUB_API_TOKEN }}"
        run: packer init -upgrade packer_templates

      - name: Packer build
        run: packer build -timestamp-ui pkr-builder.pkr.hcl
```

HCL Template: pkr-builder.pkr.hcl

```hcl
packer {
  required_version = ">= 1.7.0"
  required_plugins {
    parallels = {
      version = ">= 1.0.1"
      source  = "github.com/hashicorp/parallels"
    }
    vagrant = {
      version = ">= 1.0.2"
      source  = "github.com/hashicorp/vagrant"
    }
    vmware = {
      version = ">= 0.0.1"
      # TODO: switching to stromweld repo for a fix to VMware tools for Fusion 13
      # until the official fix is in place: https://github.com/hashicorp/packer-plugin-vmware/issues/109
      source = "github.com/stromweld/vmware"
    }
  }
}

locals {
  scripts = [
    "${path.root}/scripts/rhel/update_dnf.sh",
    "${path.root}/scripts/_common/motd.sh",
    "${path.root}/scripts/_common/sshd.sh",
    "${path.root}/scripts/_common/vagrant.sh",
    "${path.root}/scripts/_common/virtualbox.sh",
    "${path.root}/scripts/_common/vmware_rhel.sh",
    "${path.root}/scripts/_common/parallels-rhel.sh",
    "${path.root}/scripts/rhel/cleanup_dnf.sh",
    "${path.root}/scripts/_common/minimize.sh"
  ]
}

source "parallels-iso" "vm" {
  guest_os_type          = "centos"
  parallels_tools_flavor = "lin-arm"
  parallels_tools_mode   = "upload"
  prlctl = [
    ["set", "{{ .Name }}", "--3d-accelerate", "off"],
    ["set", "{{ .Name }}", "--videosize", "16"]
  ]
  prlctl_version_file = ".prlctl_version"
  boot_command        = ["<wait><up>e<wait><down><down><end><wait> inst.text inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/rhel/8ks.cfg <leftCtrlOn>x<leftCtrlOff>"]
  boot_wait           = "10s"
  cpus                = 2
  communicator        = "ssh"
  disk_size           = 65536
  http_directory      = "${path.root}/http"
  iso_checksum        = "file:https://repo.almalinux.org/almalinux/8/isos/aarch64/CHECKSUM"
  iso_url             = "https://repo.almalinux.org/almalinux/8/isos/aarch64/AlmaLinux-8.7-update-1-aarch64-minimal.iso"
  memory              = 2048
  output_directory    = "packer-almalinux-8.7-aarch64-parallels"
  shutdown_command    = "echo 'vagrant' | sudo -S /sbin/halt -h -p"
  shutdown_timeout    = "15"
  ssh_password        = "vagrant"
  ssh_port            = 22
  ssh_timeout         = "60m"
  ssh_username        = "vagrant"
  vm_name             = "almalinux-8.7-aarch64"
}

build {
  sources = ["source.parallels-iso.vm"]

  # Linux shell scripts
  provisioner "shell" {
    environment_vars = [
      "HOME_DIR=/home/vagrant",
      "http_proxy=${var.http_proxy}",
      "https_proxy=${var.https_proxy}",
      "no_proxy=${var.no_proxy}"
    ]
    execute_command   = "echo 'vagrant' | {{ .Vars }} sudo -S -E sh -eux '{{ .Path }}'"
    expect_disconnect = true
    scripts           = local.scripts
  }

  # Convert machines to Vagrant boxes
  post-processor "vagrant" {
    compression_level = 9
    output            = "${path.root}/${var.os_name}-${var.os_version}-${var.os_arch}.{{ .Provider }}.box"
  }
}
```
Thanks for the additional context @Stromweld. We don't test against self-hosted runners at this point, so this kind of input is useful. You mention that Packer is also installed locally on the runner through Homebrew.
I'm curious: what happens if you remove the Brew-provided version of Packer and use just the binary installed by this Action? Also curious: for the Brew version, are you pulling the upstream one, or are you using https://github.com/hashicorp/homebrew-tap?
I am pulling the upstream Packer with `brew install`. I'm currently running on a self-hosted runner for ARM CPU architecture builds via Packer.
Hi @Stromweld,
The upstream one would be the HashiCorp-provided and signed pre-built binary. The version provided by Brew is compiled on your machine at install time and isn't the same binary we offer. That aside, if you want to help debug this issue, I'd like you to uninstall any Brew-installed versions of Packer and retry with only the binary installed by this Action.
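As a starting point for that debugging, a small shell sketch like the following can show which `packer` binary wins on `PATH` and where plugins end up. The default plugin directory shown is an assumption based on Packer >= 1.7 behavior on macOS/Linux, and `PACKER_PLUGIN_PATH`, if set, overrides it; this is a diagnostic sketch, not something from the thread:

```shell
# List every packer binary on PATH; the first hit is what the job runs,
# which may be the Homebrew build rather than the action-provided one.
which -a packer || echo "no packer on PATH"

# Packer (>= 1.7) looks for plugins under PACKER_PLUGIN_PATH,
# falling back to ~/.config/packer/plugins on macOS/Linux.
PLUGIN_DIR="${PACKER_PLUGIN_PATH:-$HOME/.config/packer/plugins}"
echo "plugin dir: $PLUGIN_DIR"
ls "$PLUGIN_DIR" 2>/dev/null || echo "no plugins found in $PLUGIN_DIR"
```

Running this once interactively and once from inside a workflow step would show whether the two environments resolve the same binary and plugin directory.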
Expected Behavior
`packer build` provisions the server and starts the build process.
Current Behavior
When using setup-packer@v2.0.0 or @main, Packer is installed and `packer init` installs the Parallels plugin, even though the runner already has the plugin. Then, during the build job, it fails with an error saying the Parallels plugin is not found and that `packer init` needs to be run. Packer is also installed locally on the self-hosted runner through Homebrew, and if I remove the setup-packer action step the build runs as expected. The same job setup also works on an Intel Mac mini without any issues.
Here's the current output of the job run.
Steps to Reproduce
1. M2 Mac mini with the self-hosted GitHub Actions runner installed and running as a service.
2. Homebrew install of Parallels, VirtualBox, VMware Fusion, QEMU, and Packer.
3. Clone of the Bento project at github.com/chef/bento.
4. Update ci.yml in the .github/workflows directory to enable an ARM build job.
5. Create a PR and watch the job run.
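One detail worth checking while reproducing this (a hypothetical diagnostic, not something reported in the thread): a runner launched as a service can see a different `PATH` than an interactive shell, so it may resolve a different `packer` binary than the one you test by hand:

```shell
# Print the PATH the current process sees; a launchd/systemd service
# typically gets a minimal PATH that may omit Homebrew's bin directory.
echo "PATH=$PATH"

# Show which packer (if any) that PATH resolves to.
command -v packer || echo "packer not resolvable from this PATH"
```

Comparing this output between a terminal session and a workflow step on the M2 and Intel machines could explain why only the M2 runner misbehaves.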