80.10 problems on ESXi 6.5



Dom2201
2018-02-08, 07:50
Hi everyone,

I had some boot-up problems with a Check Point R80.10 on an ESXi 6.5 server.

Now I want to tell you my problem and how I fixed it.

[attachment 1365: boot screenshot]

I made several installations on my ESXi server (v6.5), both from the ISO file and from OVF templates.
I found that if I install the VM in compatibility mode 6.x (VM hardware version greater than 10), I get the problem shown in the attached picture.

##################################
pci_mmcfg_init marking 256MB space uncacheable
sda: assuming drive cache: write through
sda: assuming drive cache: write through
##################################

This problem is a known Red Hat bug (https://bugzilla.redhat.com/show_bug.cgi?id=581933), but it does not appear to be fixed in the Check Point build.

If I install or configure the VM in compatibility mode 5.x (VM hardware version 10 or lower), I have no problem: the Check Point boots up and works perfectly.
For everyone who runs a Check Point on an ESXi 6.x server: don't change the compatibility mode of the VM to 6.x, because that will bring up this problem at boot-up.

If the Check Point boots up with this failure, there is no way to enter maintenance mode (the boot-up timer is not shown), and the boot-up procedure takes noticeably longer.
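For reference, the VM hardware version can be checked directly in the VM's .vmx file; the datastore path and VM name below are only placeholder examples, not from this thread:

grep -i "virtualHW.version" /vmfs/volumes/datastore1/CP-R8010-GW/CP-R8010-GW.vmx
# "10" or lower corresponds to the 5.x compatibility mode, "11" or higher to 6.x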

jflemingeds
2018-02-08, 10:12

Did you try setting acpi_mcfg_max_pci_bus_num=on in your menu.1st file?

Dom2201
2018-02-08, 11:30
Did you try setting acpi_mcfg_max_pci_bus_num=on in your menu.1st file?

Where can I find this file?

No, I didn't try that. But I didn't want to change such "deep" settings. In my opinion, if Check Point says they support ESXi 6.5, it has to run without changing this type of setting.

Check Point said: "We will review this information and consider documentation into SK."

jflemingeds
2018-02-08, 12:34
Where can I find this file?

No, I didn't try that. But I didn't want to change such "deep" settings. In my opinion, if Check Point says they support ESXi 6.5, it has to run without changing this type of setting.

Check Point said: "We will review this information and consider documentation into SK."

Something like /boot/grub/menu.1st

The Red Hat bug you posted says it needs to be enabled (which doesn't seem to be the default for Red Hat). What I don't know is whether the code to support it is there or not.

Dom2201
2018-02-09, 04:34
Something like /boot/grub/menu.1st

The Red Hat bug you posted says it needs to be enabled (which doesn't seem to be the default for Red Hat). What I don't know is whether the code to support it is there or not.


I added the line (acpi_mcfg_max_pci_bus_num=on) you suggested to the /boot/grub/menu.1st file, but nothing changed at boot-up.
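A generic way to verify whether the parameter actually reached the kernel (standard Linux, not specific to this thread) is to check the running kernel command line after the reboot:

cat /proc/cmdline
# if acpi_mcfg_max_pci_bus_num=on does not appear here, the boot loader never passed the edit to the kernel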

abusharif
2018-02-09, 05:22
I have that issue as well, but since the only downside, AFAIK, is a slightly slower boot-up sequence, I never bothered trying to fix it.
Thanks for the tip!

jflemingeds
2018-02-09, 08:06
I have that issue as well, but since the only downside, AFAIK, is a slightly slower boot-up sequence, I never bothered trying to fix it.
Thanks for the tip!

I'm not sure it only affects boot-up. Non-cacheable RAM is bad. The bug you posted made it sound like just about any data structure could end up in that range and have a performance impact.

Of course real nerds use kvm now. ;)

jflemingeds
2018-02-09, 08:08
I added the line (acpi_mcfg_max_pci_bus_num=on) you suggested to the /boot/grub/menu.1st file, but nothing changed at boot-up.

Can you show where you did so? Just making sure you put it in the right spot. Should be on the kernel line.

abusharif
2018-02-09, 10:17
I am not able to reach my ESX at the moment, but is it possible to "downgrade" the compatibility mode/version of the VM (6 > 5) on the fly, without needing to re-install?

Dom2201
2018-02-12, 09:38
I am not able to reach my ESX at the moment, but is it possible to "downgrade" the compatibility mode/version of the VM (6 > 5) on the fly, without needing to re-install?


I tried several things on my ESXi 6.5, but I couldn't find a way to downgrade the VM.

Now I have made a snapshot of the "problem VM" and imported it into a new R80.10 VM container without upgrading the VM to ESXi 6.x.

Dom2201
2018-02-12, 09:44
Can you show where you did so? Just making sure you put it in the right spot. Should be on the kernel line.

I don't understand the question. I opened the file /boot/grub/menu.1st and added the line acpi_mcfg_max_pci_bus_num=on.

P.S. The file was empty when I opened it.

But by the way, I don't think this can be the right way to change kernel parameters...

jflemingeds
2018-02-12, 11:13
I don't understand the question. I opened the file /boot/grub/menu.1st and added the line acpi_mcfg_max_pci_bus_num=on.

P.S. The file was empty when I opened it.

But by the way, I don't think this can be the right way to change kernel parameters...

That file should not be empty. This is from an R80 open-server management server, showing the normal section of the boot loader.



title Start in normal mode
root (hd0,0)
kernel /vmlinuz ro vmalloc=256M noht root=/dev/vg_splat/lv_current panic=15 console=SERIAL crashkernel=64M@16M 3 quiet
initrd /initrd


In your case it would be:



title Start in normal mode
root (hd0,0)
kernel /vmlinuz ro vmalloc=256M noht root=/dev/vg_splat/lv_current panic=15 console=SERIAL crashkernel=64M@16M 3 quiet acpi_mcfg_max_pci_bus_num=on
initrd /initrd

Bob_Zimmerman
2018-02-12, 12:00
I am not able to reach my ESX at the moment, but is it possible to "downgrade" the compatibility mode/version of the VM (6 > 5) on the fly, without needing to re-install?

Yes, but the VM needs to be powered off for the change. You're just changing the VM hardware version. I don't know if VMware's Flash UI provides an interface to do this, but it's easy enough with other tools. It's just changing a number in the vmx file describing the VM.
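A minimal sketch of that change from the ESXi shell, assuming SSH access to the host; the datastore path and VM name are placeholders, not from this thread:

# power the VM off first, then edit its .vmx
vi /vmfs/volumes/datastore1/CP-R8010-GW/CP-R8010-GW.vmx
#   change  virtualHW.version = "13"   (or whatever 6.x value is there)
#   to      virtualHW.version = "10"
vim-cmd vmsvc/getallvms      # look up the VM's ID
vim-cmd vmsvc/reload <vmid>  # make ESXi re-read the edited .vmx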

abusharif
2018-02-12, 12:54
Yes, but the VM needs to be powered off for the change. You're just changing the VM hardware version. I don't know if VMware's Flash UI provides an interface to do this, but it's easy enough with other tools. It's just changing a number in the vmx file describing the VM.

Thanks Zimmie,

Correct, I wasn't able to find it in the web UI!
I've now changed it in the VMX file:
virtualHW.version = "10"

Dom2201
2018-03-05, 18:39
Thanks Zimmie,

Correct, I wasn't able to find it in the web UI!
I've now changed it in the VMX file:
virtualHW.version = "10"

Hi, did this work? If yes, it is a great "easy" solution.



Sent from iPhone using Tapatalk

abusharif
2018-03-06, 02:06
Hi, did this work? If yes, it is a great "easy" solution.



Sent from iPhone using Tapatalk

Yes sir!

Bob_Zimmerman
2018-07-24, 12:33
I think I spotted the disconnect. The file is /boot/grub/menu.lst (Lima Sierra Tango), while Dom opened /boot/grub/menu.1st (One Sierra Tango).
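A quick generic check to catch that kind of mix-up before editing: list the directory and confirm which file actually exists, since opening "menu.1st" in an editor simply creates a new, empty file:

ls -l /boot/grub/
# menu.lst should be listed here; menu.1st will not exist on a stock installation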

I don't think the "assuming drive cache: write through" message is the actual problem. I removed the 'quiet' boot option from a VM and rebooted. It seemed to wait a while probing 30 ATA channels. Needs more research.

Dom2201
2018-07-25, 04:48
Hi Zimmerman,

thank you for the hint. Can you explain how to disable the "quiet" boot option?


I removed the 'quiet' boot option from a VM and rebooted.



Thanks

Bob_Zimmerman
2018-07-25, 13:34
From /boot/grub/menu.lst on an R77.30 system:

title Start in 64bit normal mode
root (hd0,0)
kernel /vmlinuz-x86_64 ro root=/dev/vg_splat/lv_current vmalloc=256M noht panic=15 console=SERIAL crashkernel=128M@16M 3 quiet
initrd /initrd-x86_64
On the line starting with "kernel", you just remove the word "quiet" from the end. Keep in mind, this doesn't actually fix any problem. It just gets the system to print boot messages to the console.
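To confirm the word was removed from every boot entry (a generic check, not from this thread):

grep -n quiet /boot/grub/menu.lst
# any kernel line still containing "quiet" will keep suppressing console messages for that entry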

Dom2201
2018-07-31, 01:13
Hi Zimmie,

do I have to reload GRUB? I have tried this on an R80.10 and I don't see the boot messages.

If I have to reload GRUB, do you know the commands I need to use?

(I have a learning environment on VMs to do some experiments :cool: )


Greetz Dom