can someone point me to the upgrade process?
thanks
Greg
Hello cwen
Welcome to vSphere and Communities.
There are a number of methods of preventing different types of access, but what needs to be applied (and where) to harden the VMs depends on a number of things. For example, blocking ALL file transfer should just be a case of configuring the firewall/blocking ports at the Guest-OS level, whereas settings such as disabling copy+paste are set at the VM level using the isolation configurations:
Security Considerations for Configuring VMware Tools
Disabling shared folders with the host is perhaps another consideration (depending on what you are trying to achieve):
Disable VMware Shared Folders Sharing Host Files to the Virtual Machine
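As a concrete example (a minimal sketch; see the hardening guidance linked above for the full set of keys), disabling copy and paste between the guest and the remote console is done with VM advanced settings / .vmx entries such as:
isolation.tools.copy.disable = "TRUE"
isolation.tools.paste.disable = "TRUE"
These take effect the next time the VM is powered off and powered back on.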
Bob
To anyone else googling this and looking for a solution: install this VIB and reboot.
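For reference, a rough sketch of the install steps from the ESXi shell (the path below is only a placeholder for wherever you copied the VIB to):
# put the host in maintenance mode first, then install the VIB and reboot
esxcli software vib install -v /vmfs/volumes/datastore1/driver-name.vib
reboot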
Hi, our current vCenter version is 6.0.
Please advise whether we should use servers from different OEMs in the same cluster.
We have been using HP for 4 years and are now buying Lenovo/Dell, so we need to add HP together with Lenovo or Dell hosts in the same cluster.
Pros:
Cons:
We recently upgraded ESXi 5.5 U3 to ESXi 6.5 U2 with the Cisco customized image on C240-M4S servers. We first upgraded the Cisco firmware from 2.0(6) to 4.0(1c) and then upgraded the ESXi hosts from 5.5 U3 to 6.5 U2. (Please find the attached text for driver and firmware details before and after the upgrade.)
After the upgrade, hosts go into a not responding/frozen state: the ESXi hosts remain reachable via PING over the network, but we are unable to re-connect them back to vCenter.
While a host is in the not responding state, we can log in via PuTTY with multiple sessions, but we can't see/run any commands (e.g. df -h, or viewing logs with cat under /var/log). When we run df -h, the host doesn't display anything and gets stuck until we close the PuTTY session and re-connect.
While a host is in the not responding state, VMs continue running, but we can't migrate those VMs to another host and we are also unable to manage them via the vCloud panel.
We have to reboot the host to bring it back, and then it will connect to vCenter.
We have been working with VMware and Cisco for 3 weeks now, with no resolution yet.
We can see a lot of "Valid sense data: 0x5 0x24 0x0" entries in vmkernel.log, and VMware suspects something with the LSI MegaRAID (MRAID12G) driver. So VMware asked us to contact the hardware vendor to check for hardware/firmware issues as well as LSI issues.
2019-02-18T19:51:27.802Z cpu20:66473)ScsiDeviceIO: 3001: Cmd(0x439d48ebd740) 0x1a, CmdSN 0xea46b from world 0 to dev "naa.678da6e715bb0c801e8e3fab80a35506" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0
This command failed 4234 times on "naa.678da6e715bb0c801e8e3fab80a35506"
Display Name: Local Cisco Disk (naa.678da6e715bb0c801e8e3fab80a35506)
Vendor: Cisco | Model: UCSC-MRAID12G | Is Local: true | Is SSD: false
Cisco did not see any issues with the server/hardware after analyzing the tech support logs, and we also performed the Cisco diagnostics test on a few servers; all component tests/checks look good. The only recommendation given by Cisco was to change the Power Management policy from Balanced to High Performance under ESXi host -> Configure -> Hardware -> Power Mgmt -> Active policy -> High Performance.
Can someone help me find the cause/fix?
Thanks in advance
all components tests/ checks looks good
Hi, yes you can!
It's best if the new servers have roughly the same performance (at least a similar amount of RAM), so that if the most powerful host fails, the others have enough capacity to accommodate all VMs from the failed host.
Also, make sure the new hosts have CPUs from the same family as the old ones. Then, before adding the new hosts to the cluster, set up Enhanced vMotion Compatibility (EVC) on the cluster. This arrangement allows you to perform vMotion across old and new hosts. Failing to do this could cause problems in the future, e.g. when putting a host into maintenance mode. Remember, you need to set the EVC baseline before you add the new hosts :-)
Best regards, Pavel
After the upgrade, hosts go into a not responding/frozen state: the ESXi hosts remain reachable via PING over the network, but we are unable to re-connect them back to vCenter.
What's the version of vCenter?
We can see a lot of "Valid sense data: 0x5 0x24 0x0" entries in vmkernel.log, and VMware suspects something with the LSI MegaRAID (MRAID12G) driver. So VMware asked us to contact the hardware vendor to check for hardware/firmware issues as well as LSI issues.
What's the exact product name of the LSI controller and the Cisco PID for it?
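You should be able to pull the controller and driver details from the host itself with something like:
esxcli storage core adapter list
esxcli software vib list | grep -i lsi
The first lists each vmhba with its driver and description; the second shows the installed driver VIB versions.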
//EDIT
In addition, you can grep through vmkernel.log for lsi_mr3 and megaraid_sas driver issues:
grep 'lsi_mr3\|megaraid_sas' /var/log/vmkernel.log
Especially interesting are error events like these:
megaraid_sas: Event : Controller encountered a fatal error and was reset
megaraid_sas: Reset successful.
Thanks for your reply.
Please find the details below:
vCenter build number is 9451637 and the version is vCenter Server 6.5 U2c.
ESXi build number is 10884925 and the version is ESXi 6.5 P03.
Product Name : Cisco 12 G SAS Modular RAID Controller
Product ID: LSI Logic
Product PID: UCSCMRAID12G
Thanks
Hello,
So I am a small developer and was looking at buying a Gen 8 or Gen 9 HP ProLiant DL360. I'm looking for one to run a maximum of 15 instances, all because I went from an affordable $15 a month for cloud instances to about $95/month. A couple of my instances are heavy resource users, so I figure it's cheaper to buy a slightly used server and set something up physically. My question is: can someone help me work out which ESXi version I'd need to run, and roughly how much it would cost?
I did do some testing with ESXi 4.0 on my old PowerEdge 2950, but it ran like garbage with newer OSes. I got the gist of VMware, but would still like some knowledgeable VMware users to let me know what I'd be looking for.
Also, like I said, I am only a dev team of 1, so I don't need any crazy features, just the basics: the ability to run Linux and Windows Server OSes.
Thanks in advance!
I found a workaround. I created a new file with all the settings, then deleted sshd_config, and then copied the new file to sshd_config, and now it's all good.
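Roughly the steps, from the ESXi shell, in case it helps anyone (paths assume the default /etc/ssh location):
# back up the broken config first
cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
# create a new file containing all the required settings
vi /tmp/sshd_config.new
# remove the old file and copy the new one into place
rm /etc/ssh/sshd_config
cp /tmp/sshd_config.new /etc/ssh/sshd_config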
Hello,
I had the same issue with my DL980 G7. At first the log said vNIC error; then I did some more research on the web, and changing the RAID configuration fixed the problem. I had it set to RAID 50; after changing it to RAID 10, it loaded completely. I also have the same problem with the DL380 G7, and hope this fixes that issue as well.
The ESXi HPE 6.0 image worked fine, but I also tried installing 6.5 and 6.7, and both red-screened. My next plan is to try an upgrade; otherwise, I will need to build a custom ESXi image with the drivers for 6.5 or 6.7.
Brad
According to HPE the DL980 G7 does not support vSphere 6.5 or 6.7. The last supported version of vSphere is 6.0 U3. You're likely going to encounter driver compatibility issues, especially considering HPE is not likely going to provide updated firmware where there are new driver dependencies.
If you do opt to try to incorporate some sort of custom or adapted drivers into a 6.5/6.7 build, that could be interesting from an academic/lab standpoint. I wouldn't recommend it for production workloads though. Good luck!
I also have the same problem with the DL380 G7, hope this fixes the issue as well.
The G7 series is not officially supported past ESXi 6.0, FYI.
Thanks for your reply. I will try that.
Problem solved!
I also have a DL380p G8 with passthrough: an NVIDIA GTX 1050 Ti for the first Windows 10 VM and an NVIDIA Quadro K4000 for the second Windows 10 VM.
Both of them work without issues!
What I do:
I only pass through the video device, not the HDMI sound.
For sound I use a virtual sound driver called VBCABLE.
In the advanced options I add the following:
hypervisor.cpuid.v0 = FALSE
pciPassthru.use64bitMMIO = TRUE
pciPassthru.64bitMMIOSizeGB = 16
pciHole.start = 2048
I can restart, shutdown without any issues.
Thanks,
Ubey
Bob, thanks for the reply, very helpful.
Hi,
Now I'm testing an NVIDIA RTX 2060.
It works with the same options, but after restarting the Windows 10 VM I get a Code 43 error.
The only option is to restart the VMware (ESXi) server; then the RTX 2060 works again.
So the problem is not solved yet?!
Thanks,
Ubey
Check your default gateway for the host.
Correct the gateway if you changed it recently.
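For example, from the ESXi shell (a quick sketch; the gateway address below is just a placeholder):
# show the current routing table and default gateway
esxcli network ip route ipv4 list
# set the default gateway if it is wrong
esxcfg-route 192.168.1.254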
hello
I have a Dell R730 server with a PERC H730 Mini adapter, and my datastore is in RAID 5.
Frequently the event log in ESXi shows "Lost access to volume XXX (datastore1) due to connectivity issues. Recovery attempt is in progress."
I checked the logs from the SSH console and saw the following (I have only one datastore, on SSD drives):
ScsiDeviceIO: 2595: Cmd(0x43b58d2ed5c0) 0x28, CmdSN 0x800e0002 from world 35759 to dev "naa.6d09466004ae990022961a0d0b59d80d" failed H:0x8 D:0x0 P:0x0
2019-02-23T10:31:53.119Z cpu11:32873)ScsiDeviceIO: 2595: Cmd(0x43b58d270e40) 0x28, CmdSN 0x800e000b from world 35759 to dev "naa.6d09466004ae990022961a0d0b59d80d" failed H:0x8 D:0x0 P:0x0
2019-02-23T10:31:53.119Z cpu11:32873)lsi_mr3: mfi_TaskMgmt:254: Processing taskMgmt virt reset for device: vmhba0:C2:T0:L0
2019-02-23T10:31:53.119Z cpu11:32873)lsi_mr3: mfi_TaskMgmt:258: VIRT_RESET cmd # 135116065
2019-02-23T10:31:53.119Z cpu11:32873)lsi_mr3: mfi_TaskMgmt:262: ABORT
2019-02-23T10:31:54.120Z cpu11:32873)HBX: 283: 'datastore1': HB at offset 3735552 - Reclaimed heartbeat [Timeout]:
2019-02-23T10:31:54.120Z cpu11:32873) [HB state abcdef02 offset 3735552 gen 91 stampUS 45315583700 uuid 5c707035-ec8b7a6e-c913-801844e2c562 jrnl <FB 1184200> drv 14.61 lockImpl 3]
2019-02-23T10:32:05.555Z cpu1:33511)NMP: nmp_ThrottleLogForDevice:3248: last error status from device naa.6d09466004ae990022961a0d0b59d80d repeated 1 times
2019-02-23T10:32:08.389Z cpu32:63449)User: 3820: sfcb-smx: wantCoreDump:sfcb-smx signal:6 exitCode:0 coredump:disabled
2019-02-23T10:33:34.198Z cpu37:34348 opID=a2a46491)World: 15544: VC opID 8361D15F-00000139-653b maps to vmkernel opID a2a46491
2019-02-23T10:33:34.198Z cpu37:34348 opID=a2a46491)FSS: 5764: Conflict between buffered and unbuffered open (file 'XXXX.vmdk'):flags 0x4008, requested flags 0x40001
2019-02-23T10:33:34.198Z cpu37:34348 opID=a2a46491)FSS: 6249: Failed to open file 'XXXXX.vmdk'; Requested flags 0x40001, world: 34348 [hostd-worker], (Existing flags 0x4008, world: 35921 [vmx]): Busy
2019-02-23T10:33:46.604Z cpu38:34349 opID=3d030e65)World: 15544: VC opID 8361D15F-00000142-6544 maps to vmkernel opID 3d030e65
2019-02-23T10:33:46.604Z cpu38:34349 opID=3d030e65)FSS: 5764: Conflict between buffered and unbuffered open (file 'XXXXXXX.vmdk'):flags 0x4008, requested flags 0x40001
2019-02-23T10:33:46.604Z cpu38:34349 opID=3d030e65)FSS: 6249: Failed to open file 'XXXXXXXXXXX.vmdk'; Requested flags 0x40001, world: 34349 [hostd-worker], (Existing flags 0x4008, world: 35906 [vmx]): Busy
2019-02-23T10:33:47.260Z cpu36:34912 opID=e6a3d49)World: 15544: VC opID 8361D15F-00000144-6546 maps to vmkernel opID e6a3d49
2019-02-23T10:33:47.260Z cpu36:34912 opID=e6a3d49)FSS: 5764: Conflict between buffered and unbuffered open (file 'XXXXXXXXX.vmdk'):flags 0x4008, requested flags 0x40001
2019-02-23T10:33:47.264Z cpu36:34912 opID=e6a3d49)FSS: 6249: Failed to open file 'XXXXXXX.vmdk'; Requested flags 0x40001, world: 34912 [hostd-worker], (Existing flags 0x4008, world: 35758 [vmx]): Busy
2019-02-23T10:35:05.151Z cpu20:33512)ScsiDeviceIO: 2636: Cmd(0x43bd8d89a8c0) 0x1a, CmdSN 0x115f from world 0 to dev "naa.6d09466004ae990022961a0d0b59d80d" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.