I have a VM with 64 vCPUs and 512 GB RAM; it's a massive DB. The ESXi 6.5 hosts are HP DL560s with 4 sockets (22 cores each, plus HT) and 1.5 TB of RAM. Following Frank Denneman's book *vSphere 6.5 Host Resources Deep Dive*, I aimed at keeping the VPD on as few physical sockets as possible. By disabling the CPU Hot Add feature and setting preferHT to TRUE, I increased performance quite a lot, and I expected to see the cores spread across two physical sockets.

However, while VMware KB 2003582 states how to implement the preferHT setting, it does not mention something Frank Denneman says in his book: “Please remember to adjust the setting in the VM if it has already been powered on once. This setting overrides the preferHT=TRUE setting.”

I read the above after I had made the initial changes to the VM, and I have now noticed that its value is 11. According to the Virtual NUMA Controls, I should get 6 virtual nodes by dividing 64 by 11, but I see the VM has 7.

This is the current layout of the CPU resources of the VM: [screenshot]

Although performance has improved, I'm not happy with the distribution of the cores, especially considering that homeNode 3 is not used at all.

So, to recap, my questions to the experienced admins are the following:

1. Why does the VM have 7 virtual nodes instead of 6? This is another thing I don't understand.
2. Following which criteria do I adjust the value so that the preferHT setting is enforced correctly? Shall I set it to 44, since that is the maximum number of logical cores in a physical socket? Or shall I disable it and let the system make its best decision? If so, how do I disable it?
3. I knew that in 6.5 the coresPerSocket setting was decoupled from the socket setting, so it no longer really matters whether you set 12 sockets x 1 core or 1 socket x 12 cores (unless licensing restrictions are in place). The book, however, recommends: “If preferHT is used, we recommend aligning the cores per socket to the physical CPU package layout. This leverages the OS and application LLC optimizations the most.” So, in this case, is the use of coresPerSocket effective? Should I then set 2 sockets x 32 coresPerSocket? That is an option which, frankly, I haven't seen available in the VM Settings window.

---

The advice in the book is to propagate the virtual NUMA topology to the guest OS. With that approach, your VM of 64 vCPUs should be distributed across two NUMA clients, each grouping 32 vCPUs. With preferHT, the NUMA scheduler takes the SMT capabilities into account, and therefore each NUMA node can now accept NUMA clients sized up to the HT thread count. In your situation, there are four NUMA nodes, each with 22 cores and 384 GB of memory (assuming the DIMMs are equally distributed across the sockets and offer the same capacity). PreferHT is used to consolidate the vCPUs as much as possible and create the fewest number of NUMA clients. This is typically recommended for VMs that exceed the memory capacity of a NUMA node while still being able to fit all their vCPUs in a single NUMA node. In your scenario, setting it to 11 restricts the scheduler to prefer HT threads for sizing the NUMA client: 64/11 = 5.8, which should round up to 6, but it doesn't do this, so I expect other advanced settings are influencing the NUMA client configuration.
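For reference, a hedged sketch of the `.vmx` entries this thread revolves around. The scraped quotes do not name the exact keys, so treat these as assumptions to verify: `numa.vcpu.preferHT` is the key documented in KB 2003582, `cpuid.coresPerSocket` is the standard cores-per-socket key, and `numa.autosize.vcpu.maxPerVirtualNode` is commonly the autosized per-node value written at first power-on, which I assume is the "value of 11" being discussed.

```
numvcpus = "64"
cpuid.coresPerSocket = "32"                   # the 2 sockets x 32 cores layout asked about in question 3
numa.vcpu.preferHT = "TRUE"                   # consolidate NUMA clients onto HT threads (KB 2003582)
numa.autosize.vcpu.maxPerVirtualNode = "11"   # assumption: the autosized value that overrides preferHT sizing
```

Editing these requires the VM to be powered off, and the autosized value in particular should be rechecked against the book's "Virtual NUMA Controls" section before changing it.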
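The node-count arithmetic discussed in the thread can be sanity-checked in a few lines. This is only a sketch under the assumption that ESXi sizes virtual NUMA nodes by plain ceiling division of vCPU count over the per-node maximum (the 64/11 reasoning above); the function name is mine, not a VMware API.

```python
import math

def virtual_numa_nodes(vcpus: int, max_per_virtual_node: int) -> int:
    """Expected virtual NUMA node count, assuming ESXi simply splits the
    vCPUs into groups of at most `max_per_virtual_node` (plain ceiling
    division, which is what the 64/11 example in the thread implies)."""
    return math.ceil(vcpus / max_per_virtual_node)

# The scenario from the post: 64 vCPUs with the noticed value of 11.
print(virtual_numa_nodes(64, 11))  # 6 -- yet the VM actually shows 7 nodes

# With preferHT counting all 44 HT threads of a 22-core socket:
print(virtual_numa_nodes(64, 44))  # 2 clients, i.e. two groups of 32 vCPUs
```

The mismatch between the computed 6 and the observed 7 is exactly what question 1 is about.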
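The capacity figures in the reply can be checked the same way; all numbers below come from the post itself, under its stated assumption of evenly distributed DIMMs.

```python
# Host: 4 sockets, 22 cores each (44 HT threads per socket), 1536 GB RAM total.
sockets, cores_per_socket, host_ram_gb = 4, 22, 1536
threads_per_socket = cores_per_socket * 2
ram_per_node_gb = host_ram_gb // sockets   # 384 GB per NUMA node

vm_vcpus, vm_ram_gb = 64, 512

print(ram_per_node_gb)                     # 384
print(vm_ram_gb > ram_per_node_gb)         # True: the VM's memory spans at least two nodes
print(vm_vcpus <= threads_per_socket)      # False: 64 vCPUs exceed one socket even with HT
print(vm_vcpus <= 2 * threads_per_socket)  # True: two sockets suffice when HT threads count
```

This matches the reply's reasoning: the VM's 512 GB cannot fit a single 384 GB node, while the 64 vCPUs fit on two sockets once preferHT lets the scheduler count HT threads.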