Channel: VMware Communities: Message List - ESXi

Re: vCPU max latency below a few microseconds ?


Hi John,

 

Thanks a lot for your valuable answer.

 

The host is a single-socket 12-core Skylake-SP processor (no NUMA).

We give the VM 8 cores because the application has 7 threads, each pinned to one core, and each thread uses 100% CPU all the time. The remaining core is for the Linux system.

 

We tried CPU affinity, but the result was worse and %RDY was higher.

 

We still have to investigate C-states and AVX turbo states. We will also study the Deep Dive ebook.
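
As a rough sketch of what we plan to capture next (the sample interval is arbitrary, and the latency-sensitivity setting is just something we intend to test, not a confirmed fix):

  # Capture 60 esxtop samples at 2-second intervals in batch mode, for offline
  # analysis of the per-vCPU %RDY, %CSTP and %MLMTD columns
  esxtop -b -d 2 -n 60 > /tmp/esxtop-latency.csv

  # In the VM's .vmx (or via the UI), request high latency sensitivity so the
  # scheduler gives the vCPUs exclusive physical cores (requires a full memory
  # reservation)
  sched.cpu.latencySensitivity = "high"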

 

I will post feedback here when we get more results.

 

Sylvie


Re: Datastore NFS or iSCSI


It depends; I have used both. Remember that presenting storage via NFS is different from presenting it via iSCSI. With iSCSI, the VMware hosts see block devices, which are formatted with VMFS (the Virtual Machine File System). NFS presents a ready-made file system to be used for storage.

 

A lot of your choice depends on the hardware/software you are running. For example, if you use the NFS server role on Windows Server to present storage, it's going to be a bad experience; Microsoft's implementation of NFS is not very good. If you use a Synology device and present iSCSI to vSphere, you'll hit severe performance issues. The above are just known issues with those vendors. Generally, if you buy a dedicated server (Dell, HP, Supermicro) or build your own and use quality network cards (Intel, etc.), you will see similar performance regardless of which protocol you use.

 

One difference between iSCSI and NFS worth mentioning: with iSCSI you can use multipathing and load balancing to provide redundancy and reliability; I believe vSphere still connects via NFS v3, which means you won't have those options.
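
For example, switching an iSCSI device to round-robin path selection is a one-liner with esxcli (a rough sketch; the naa identifier below is only a placeholder for your own device ID):

  # List devices and their current path selection policy, then switch one to round robin
  esxcli storage nmp device list
  esxcli storage nmp device set --device naa.60000000000000000000000000000001 --psp VMW_PSP_RR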

 

I personally prefer iSCSI; I would rather let vSphere manage the file system.

I cannot access the internet from my VM on ESXi 6.0


Hey there

 

 

I cannot access the internet from my VM (Windows Server or Ubuntu 14.04).

 

I have an IP range from my ISP: 198.204.251.194 - 198.204.251.198 (5 IPs).

 

My gateway is 198.204.251.193.

 

I am using 198.204.251.194 as the IP of the ESXi host itself.

 

What is the problem?

Re: KMIP 1.1 key management servers for vsphere 6.5 encryption


KeyNexus provides a KMIP server that isn't connected to encryption or any other services; it is a standalone KMIP server. It's also easy to set up and can be fully integrated with vSphere. Here's the integration guide: https://keynexus.net/wp-content/uploads/keynexus_vsphere_v2.4.pdf

If you have any questions, direct message me and I can pass along more information.

Re: "inexpensive" KMIP software/provider??


If you're looking for proper high availability (a standard enterprise feature), $5k/year is in the ballpark for market rates. KeyNexus would be happy to work closely with you on price given that you are an NPO. If you have any questions, direct message me and I can pass along more information.

Re: High CPU usage by system process after 49 days of uptime (OCFlush)


Having the same problem here. ESXi 6.7.0 Update 1 (Build 10302608). It seems to be hardware dependent. I have the same build installed on 5 different machines and this issue occurs on only one of them: an ASRock E3C226D2I motherboard with an Intel Haswell Core i3-4170 3.7 GHz CPU and 16 GB RAM.

Re: High CPU usage by system process after 49 days of uptime (OCFlush)


Maybe we could compare the device drivers the hosts are using, to sort out the culprit causing this kind of overflow?
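
For example, something along these lines would dump the relevant details per host (vmnic0 is just an example adapter name):

  # NIC inventory, driver/firmware details for one adapter, and installed driver VIBs
  esxcli network nic list
  esxcli network nic get -n vmnic0
  esxcli software vib list | grep -i net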

A two-storey villa in a modern style with a garden


This villa is designed on an area of 120 m2 with a spacious garden. Viewed from above, you will see that this modern villa has an elegant design with two black roofs that stand out against the white paint, creating an airy feel. In particular, the architects were very skillful in designing this two-storey villa (biet thu 2 tang), making use of the empty plots of land to create landscaping and small garden features around the house.

At the rear of the house is an outdoor swimming pool, which not only keeps the house cooler but is also a place where family members can relax on hot summer days.

This two-storey villa is composed of basic geometric blocks, yet it still conveys harmony and balance between the functional volumes.

Standing in the front yard and looking up at this two-storey villa, you can see greenery everywhere in the trees and flowers. In addition, the large glass windows on both the first and second floors let in natural light from outside, making the interior more airy and cooler, and letting the owners take in the view of the surrounding nature. On the second floor, the balcony is cantilevered outward to make room for flowers and small ornamental plants.

We specialize in designing, building and consulting on modern two-storey villas. If you are interested, please contact us at 0981221369 for more detailed advice.


Creating Host-only network in esxi 5.5


Can I create a host-only network on ESXi 5.5 with DHCP enabled, similar to the host-only network created in VMware Workstation? Any guidance is most welcome.

Re: unable to delete a datastore


Hi ThompsG, Hi hassan

 

First I tried to increase the datastore, but because of some problems I decided to delete it. Unfortunately it persists, because the VM seems to still be using it even though the partition has been deleted.

 

ThompsG pointed out that a snapshot is involved. So, as ThompsG suggested, I'm going to stop the VM, migrate it (via a copy) to another host, and run a consolidation.

 

to be continued ...

 

Thanks to all

Re: Datastore NFS or iSCSI


This paper is a couple of years old but still worth reading, although some details have changed: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/storage_protocol_comparison-white-paper.pdf

 


 

Personally, I would probably go for NFS; it is much easier to configure. Unless you need a feature like UNMAP, in which case you would need to go with iSCSI. That is probably the biggest difference right now.
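
For what it's worth, mounting an NFS export only takes a single esxcli call (a rough sketch; the server name, export path and datastore name are placeholders):

  # Mount an NFS v3 export as a datastore, then verify it shows up
  esxcli storage nfs add --host nas01.example.local --share /volume1/vmware --volume-name nfs-ds01
  esxcli storage nfs list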

Re: Datastore NFS or iSCSI


I'll have to take a look, thanks!

Re: Datastore NFS or iSCSI


Thanks very much for your comments.

 

I think I will try to create a datastore with NFS and another with iSCSI. I think I can mix different technologies.

I don't know if I can find a tool to measure performance. Do you know of one?

 

Another question: when I want to create a new volume on my NAS, I must choose a file system.

I can use ext3, ext4 or btrfs.

If I create it with btrfs, I cannot find the volume on my ESXi host after a rescan. Do you know why?

Maybe btrfs is not the best choice, but I chose it so I could take snapshots of my volume for restores.

So for now I have created one datastore on ext4. What do you think?

 

Thanks very much

Stopping I/O on vmnic0


Hello,

 

For a couple of weeks now we have been experiencing intermittent connectivity on a daily basis. It does not always happen at the same time, nor every day, but in the ESXi logs we see the following:

 

2019-03-13T20:06:20.426Z cpu2:2097220)igbn: indrv_UplinkReset:1447: indrv_UplinkReset : vmnic0 device reset started

2019-03-13T20:06:20.426Z cpu2:2097220)igbn: indrv_UplinkQuiesceIo:1411: Stopping I/O on vmnic0

2019-03-13T20:06:20.462Z cpu2:2097220)igbn: indrv_DeviceReset:2306: Device Resetting vmnic0

2019-03-13T20:06:20.462Z cpu2:2097220)igbn: indrv_ChangeState:326: vmnic0: change PF state from 2 to 8

2019-03-13T20:06:20.462Z cpu2:2097220)igbn: indrv_Stop:1890: stopping vmnic0

2019-03-13T20:06:20.462Z cpu2:2097220)igbn: indrv_ChangeState:326: vmnic0: change PF state from 8 to 4

2019-03-13T20:06:20.492Z cpu2:2097220)igbn: indrv_ChangeState:326: vmnic0: change PF state from 4 to 1

2019-03-13T20:06:20.492Z cpu2:2097220)igbn: indrv_ChangeState:326: vmnic0: change PF state from 1 to 20

2019-03-13T20:06:20.493Z cpu2:2097220)igbn: indrv_ChangeState:326: vmnic0: change PF state from 20 to 2

2019-03-13T20:06:20.493Z cpu2:2097220)igbn: indrv_UplinkStartIo:1393: Starting I/O on vmnic0

2019-03-13T20:06:20.507Z cpu2:2097220)igbn: indrv_UplinkReset:1464: indrv_UplinkReset : vmnic0 device reset completed

2019-03-13T20:06:27.426Z cpu2:2097220)NetqueueBal: 5032: vmnic0: device Up notification, reset logical space needed

2019-03-13T20:06:27.427Z cpu3:2212666)NetSched: 654: vmnic0-0-tx: worldID = 2212666 exits

2019-03-13T20:06:27.428Z cpu3:2224632)NetSched: 654: vmnic0-0-tx: worldID = 2224632 exits

2019-03-13T20:06:27.428Z cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 1

2019-03-13T20:06:27.428Z cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 4

...

 

In syslog we see some more related events, as the access switch in the datacenter reports an up/down event:

 

        

Date/Time            Facility  Level    Hostname         Message text
  13/03/201921:06:533/13/19 21:06UserInfovCenter1 2019-03-13T20:06:56.340772+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:56:339Z 'Activation.trace' 140109948208896 INFO [activationValidator, 1261] Trace objects loaded.
13/03/201921:06:533/13/19 21:06UserInfovCenter1 2019-03-13T20:06:56.340557+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:56:337Z 'InternalScheduledTasksMgr' 140109948208896 INFO [internalScheduledTasksMgr, 853] Temp directory disk free space is:8407379968
13/03/201921:06:533/13/19 21:06UserInfovCenter1 2019-03-13T20:06:56.340338+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:56:337Z 'InternalScheduledTasksMgr' 140109948208896 INFO [internalScheduledTasksMgr, 804] Patch store disk free space is:104520380416
13/03/201921:06:533/13/19 21:06UserInfovCenter1 2019-03-13T20:06:56.340114+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:56:337Z 'InternalScheduledTasksMgr' 140109948208896 INFO [internalScheduledTasksMgr, 303] Internal Scheduled Tasks Manager Timercallback end of this timer slice.....Rescheduling after 300000000 microseconds
13/03/201921:06:533/13/19 21:06UserInfovCenter1 2019-03-13T20:06:56.339871+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:56:337Z 'InternalScheduledTasksMgr' 140109948208896 INFO [internalScheduledTasksMgr, 724] InvokeCallbacks. Total number ofcallbacks: 7
13/03/201921:06:533/13/19 21:06UserInfovCenter1 2019-03-13T20:06:56.339443+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:56:337Z 'InternalScheduledTasksMgr' 140109948208896 INFO [internalScheduledTasksMgr, 194] Internal Scheduled Tasks Manager Timercallback...
13/03/201921:06:243/13/19 21:06Local7NoticeNetwork Switch819: Mar 13 2019 20:06:27.732 UTC: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet1/0/2, changed state to up
13/03/201921:06:223/13/19 21:06Local7ErrorNetwork Switch818: Mar 13 2019 20:06:25.686 UTC: %LINK-3-UPDOWN: Interface GigabitEthernet1/0/2, changed state to up
13/03/201921:06:203/13/19 21:06Local7ErrorNetwork Switch817: Mar 13 2019 20:06:22.875 UTC: %LINK-3-UPDOWN: Interface GigabitEthernet1/0/2, changed state to down
13/03/201921:06:193/13/19 21:06Local7NoticeNetwork Switch816: Mar 13 2019 20:06:21.860 UTC: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet1/0/2, changed state to down
13/03/201921:06:173/13/19 21:06UserInfovCenter1 2019-03-13T20:06:21.757084+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:21:756Z 'VcIntegrity' 140109287241472 INFO [vcIntegrity, 1536] Cannot get IP address for host name: tpvc-pvvm-003
13/03/201921:06:173/13/19 21:06UserInfovCenter1 2019-03-13T20:06:21.752196+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:21:742Z 'VcIntegrity' 140109287241472 INFO [vcIntegrity, 1519] Getting IP Address from host name: tpvc-pvvm-003
13/03/201921:06:173/13/19 21:06UserInfovCenter1 2019-03-13T20:06:21.742931+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:21:742Z 'Activation' 140109287241472 INFO [activationValidator, 368] Leave Validate. Succeeded forintegrity.VcIntegrity.retrieveHostIPAddresses on target: Integrity.VcIntegrity
13/03/201921:06:073/13/19 21:06UserInfovCenter1 2019-03-13T20:06:11.089546+00:00 tpvc-pvvm-003 vpxd 4459 - - Event [614748] [1-1] [2019-03-13T20:06:11.089256Z] [vim.event.UserLoginSessionEvent] [info] [TRUEPARTNER\sa-veeam] [] [614748] [UserTRUEPARTNER\sa-veeam@x.x.3.101 logged in as VMware VI Client]
13/03/201921:05:583/13/19 21:05UserDebugvCenter1 2019-03-13T20:06:02.401034+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:02:400Z 'JobDispatcher' 140109819787008 DEBUG [JobDispatcher, 415] The number of tasks: 0
13/03/201921:05:583/13/19 21:05CronInfovCenter1 2019-03-13T20:06:01.830614+00:00 tpvc-pvvm-003 CROND 55022 - - (root) CMD (. /etc/profile.d/VMware-visl-integration.sh; /usr/lib/applmgmt/backup_restore/scripts/SchedulerCron.py>>/var/log/vmware/applmgmt/backupSchedulerCron.log 2>&1)
13/03/201921:05:573/13/19 21:05CronInfovCenter1 2019-03-13T20:06:01.830222+00:00 tpvc-pvvm-003 CROND 55021 - - (root) CMD ( test -x /usr/sbin/vpxd_periodic && /usr/sbin/vpxd_periodic >/dev/null 2>&1)
13/03/201921:05:573/13/19 21:05UserInfovCenter1 2019-03-13T20:06:01.756909+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:01:756Z 'VcIntegrity' 140109286442752 INFO [vcIntegrity, 1536] Cannot get IP address for host name: tpvc-pvvm-003
13/03/201921:05:573/13/19 21:05UserInfovCenter1 2019-03-13T20:06:01.752179+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:01:743Z 'VcIntegrity' 140109286442752 INFO [vcIntegrity, 1519] Getting IP Address from host name: tpvc-pvvm-003
13/03/201921:05:573/13/19 21:05UserInfovCenter1 2019-03-13T20:06:01.744337+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:01:743Z 'Activation' 140109286442752 INFO [activationValidator, 368] Leave Validate. Succeeded forintegrity.VcIntegrity.retrieveHostIPAddresses on target: Integrity.VcIntegrity

 

Hardware profile:

 

Cisco 3750 switch stack

 

Server hardware:

 

  • Hypervisor: VMware ESXi, 6.7.0, 11675023
  • Model: PowerEdge R720
  • Processor Type: Intel(R) Xeon(R) CPU E5-2609 v2 @ 2.50GHz
  • Logical Processors: 8
  • NICs: 4
  • Virtual Machines: 22
  • State: Connected
  • Uptime: 11 days

 

At this moment it is also unknown why vmnic1 is not taking over the traffic, as it is configured to become active when the primary link (vmnic0) fails.
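
For completeness, this is roughly how the current teaming/failover order can be dumped on the host (a sketch; it assumes the port groups live on vSwitch0, so adjust the names to your setup):

  # Active/standby order at the vSwitch level and for one port group
  esxcli network vswitch standard policy failover get -v vSwitch0
  esxcli network vswitch standard portgroup policy failover get -p "Management Network"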

 

Any suggestions are welcome and if more information is required then let me know.

 

Regards,

 

Martin Meuwese

Re: Creating Host-only network in esxi 5.5


On an ESXi server you cannot create the same kind of network as in Workstation, since creating a vSwitch is required.

Re: Stopping I/O on vmnic0


Some more details, with all logs combined:

 

    

20:05:10.190error hostd[2098841] [Originator@6876 sub=Hostsvc.NsxSpecTracker] Object not found/hostspec disabled
20:05:19Z sdInjector[2098648]: Injector: Sleeping!
20:05:20.016warning hostd[2098657] [Originator@6876 sub=Statssvc] Calculated write I/O size 844686 for scsi0:2 is out of range -- 844686,prevBytes = 81267164160 curBytes =81297572864 prevCommands = 479208curCommands = 479244
20:05:20.016warning hostd[2098657] [Originator@6876 sub=Statssvc] Calculated write I/O size 889928 for scsi0:5 is out of range -- 889928,prevBytes = 900956103680 curBytes =901019288576 prevCommands = 1216542curCommands = 1216613
20:05:40.018warning hostd[2098657] [Originator@6876 sub=Statssvc] Calculated write I/O size 718950 for scsi0:2 is out of range -- 718950,prevBytes = 81297572864 curBytes =81355088896 prevCommands = 479244curCommands = 479324
20:05:40.018warning hostd[2098657] [Originator@6876 sub=Statssvc] Calculated write I/O size 967077 for scsi0:5 is out of range -- 967077,prevBytes = 901019288576 curBytes =901150811136 prevCommands = 1216613curCommands = 1216749
20:05:40.191error hostd[2099336] [Originator@6876 sub=Hostsvc.NsxSpecTracker] Object not found/hostspec disabled
20:05:51Z sdInjector[2098648]: Injector: Sleeping!
20:05:56.278info hostd[2099329] [Originator@6876 sub=Libs opID=ef8bddf0] NetstackInstanceImpl: congestion control algorithm: newreno
20:06:00.017warning hostd[2099341] [Originator@6876 sub=Statssvc] Calculated write I/O size 831981 for scsi0:5 is out of range -- 831981,prevBytes = 901150811136 curBytes =901420372992 prevCommands = 1216749curCommands = 1217073
20:06:10.192error hostd[2098841] [Originator@6876 sub=Hostsvc.NsxSpecTracker] Object not found/hostspec disabled
20:06:20.017warning hostd[2098843] [Originator@6876 sub=Statssvc] Calculated write I/O size 915240 for scsi0:5 is out of range -- 915240,prevBytes = 901420372992 curBytes =901524710400 prevCommands = 1217073curCommands = 1217187
20:06:20.426cpu2:2097220)igbn: indrv_UplinkReset:1447: indrv_UplinkReset : vmnic0 device reset started
20:06:20.426cpu2:2097220)igbn: indrv_UplinkQuiesceIo:1411: Stopping I/O on vmnic0
20:06:20.429[netCorrelator] 899430151427us: [vob.net.uplink.watchdog.timeout] Watchdog timeout occurred for uplink vmnic0
20:06:20.462cpu2:2097220)igbn: indrv_DeviceReset:2306: Device Resetting vmnic0
20:06:20.462cpu2:2097220)igbn: indrv_ChangeState:326: vmnic0: change PF state from 2 to 8
20:06:20.462cpu2:2097220)igbn: indrv_Stop:1890: stopping vmnic0
20:06:20.462cpu2:2097220)igbn: indrv_ChangeState:326: vmnic0: change PF state from 8 to 4
20:06:20.492cpu2:2097220)igbn: indrv_ChangeState:326: vmnic0: change PF state from 4 to 1
20:06:20.492cpu2:2097220)igbn: indrv_ChangeState:326: vmnic0: change PF state from 1 to 20
20:06:20.492cpu2:2097220)igbn: indrv_EnableISR:1060: registering RX IRQ[0]=20
20:06:20.492cpu2:2097220)igbn: indrv_EnableISR:1080: registering TX IRQ[1]=21
20:06:20.492cpu2:2097220)igbn: indrv_EnableISR:1100: registering misc IRQ[2]=22
20:06:20.493cpu2:2097220)igbn: igbn_CheckLink:1272: Link got up for device 0x4307747850c0
20:06:20.493cpu2:2097220)igbn: indrv_ChangeState:326: vmnic0: change PF state from 20 to 2
20:06:20.493cpu2:2097220)igbn: indrv_UplinkStartIo:1393: Starting I/O on vmnic0
20:06:20.507cpu2:2097220)igbn: indrv_UplinkReset:1464: indrv_UplinkReset : vmnic0 device reset completed
20:06:20.507cpu2:2097220)igbn: indrv_EventISR:922: Event ISR called on pf 0x4307747850c0
20:06:20.507cpu0:2097599)igbn: indrv_Worker:2032: Checking async events for device 0x4307747850c0
20:06:20.507cpu0:2097599)igbn: igbn_CheckLink:1272: Link went down for device 0x4307747850c0
20:06:20.508[netCorrelator] 899430232265us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: UAT Servers. 0 uplinks up. Failed criteria: 128
20:06:20.508[netCorrelator] 899430232269us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Production servers. 0 uplinks up. Failed criteria:128
20:06:20.508[netCorrelator] 899430232271us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: VM Network. 0 uplinks up. Failed criteria: 128
20:06:20.508[netCorrelator] 899430232272us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Management Network. 0 uplinks up. Failed criteria:128
20:06:20.508[netCorrelator] 899430232281us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: UAT Servers. 1 uplinks up. Failed criteria: 128
20:06:20.508[netCorrelator] 899430232282us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Production servers. 1 uplinks up. Failed criteria:128
20:06:20.508[netCorrelator] 899430232283us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: VM Network. 1 uplinks up. Failed criteria: 128
20:06:20.508[netCorrelator] 899430232284us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Management Network. 1 uplinks up. Failed criteria:128
20:06:20.508[netCorrelator] 899430232291us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: UAT Servers. 1 uplinks up. Failed criteria: 128
20:06:20.509[netCorrelator] 899430232292us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Production servers. 1 uplinks up. Failed criteria:128
20:06:20.509[netCorrelator] 899430232293us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: VM Network. 1 uplinks up. Failed criteria: 128
20:06:20.509[netCorrelator] 899430232294us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Management Network. 1 uplinks up. Failed criteria:128
20:06:20.509[netCorrelator] 899430232301us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: UAT Servers. 1 uplinks up. Failed criteria: 128
20:06:20.509[netCorrelator] 899430232302us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Production servers. 1 uplinks up. Failed criteria:128
20:06:20.509[netCorrelator] 899430232303us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: VM Network. 1 uplinks up. Failed criteria: 128
20:06:20.509[netCorrelator] 899430232304us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Management Network. 1 uplinks up. Failed criteria:128
20:06:20.509[netCorrelator] 899430232311us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: UAT Servers. 1 uplinks up. Failed criteria: 128
20:06:20.509[netCorrelator] 899430232312us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Production servers. 1 uplinks up. Failed criteria:128
20:06:20.509[netCorrelator] 899430232313us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: VM Network. 1 uplinks up. Failed criteria: 128
20:06:20.509[netCorrelator] 899430232314us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Management Network. 1 uplinks up. Failed criteria:128
20:06:20.509[netCorrelator] 899430232320us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: UAT Servers. 1 uplinks up. Failed criteria: 128
20:06:20.509[netCorrelator] 899430232321us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Production servers. 1 uplinks up. Failed criteria:128
20:06:20.509[netCorrelator] 899430232322us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: VM Network. 1 uplinks up. Failed criteria: 128
20:06:20.509[netCorrelator] 899430232323us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Management Network. 1 uplinks up. Failed criteria:128
20:06:20.509[netCorrelator] 899430232329us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: UAT Servers. 1 uplinks up. Failed criteria: 128
20:06:20.509[netCorrelator] 899430232330us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Production servers. 1 uplinks up. Failed criteria:128
20:06:20.586warning hostd[2099334] [Originator@6876 sub=Hostsvc.Tpm20Provider opID=ef8bde0c] Unable to retrieve TPM/TXT status. TPM functionality will be unavailable. Failurereason: Unable to get node: Sysinfo error: Not foundSee VMkernel lo.
20:06:20.605info hostd[2099334] [Originator@6876 sub=Libs opID=ef8bde0c] IOFilterInfoImpl: Inbox-IOFilter Id: VMW_spm_1.0.0, localId: spm
20:06:20.605info hostd[2099334] [Originator@6876 sub=Libs opID=ef8bde0c] IOFilterInfoImpl: Inbox-IOFilter Id: VMW_vmwarevmcrypt_1.0.0, localId: vmwarevmcrypt
20:06:20.609info hostd[2099334] [Originator@6876 sub=Libs opID=ef8bde0c] PluginLdr_Load: Loaded plugin 'libvmiof-disk-spm.so' from '/usr/lib64/vmware/plugin/libvmiof-disk-spm.so'
20:06:20.611info hostd[2099334] [Originator@6876 sub=Libs opID=ef8bde0c] PluginLdr_Load: Loaded plugin 'libvmiof-disk-vmwarevmcrypt.so' from '/usr/lib64/vmware/plugin/libvmiof-disk-vmwarevmcrypt.so'
20:06:23Z sdInjector[2098648]: Injector: Sleeping!
20:06:24.283cpu0:2097599)igbn: indrv_Worker:2032: Checking async events for device 0x4307747850c0
20:06:24.283cpu2:2100334)igbn: indrv_EventISR:922: Event ISR called on pf 0x4307747850c0
20:06:24.283cpu0:2097599)igbn: igbn_CheckLink:1272: Link got up for device 0x4307747850c0
20:06:24.283[netCorrelator] 899434008232us: [vob.net.vmnic.linkstate.up] vmnic vmnic0 linkstate up
20:06:24.362warning hostd[2099342] [Originator@6876 sub=Hostsvc.Tpm20Provider opID=ef8bde18] Unable to retrieve TPM/TXT status. TPM functionality will be unavailable. Failurereason: Unable to get node: Sysinfo error: Not foundSee VMkernel lo.
20:06:24.384[netCorrelator] 899434108455us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: UAT Servers. 0 uplinks up
20:06:24.384[netCorrelator] 899434108459us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Production servers. 0 uplinks up
20:06:24.384[netCorrelator] 899434108460us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: VM Network. 0 uplinks up
20:06:24.384[netCorrelator] 899434108461us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Management Network. 0 uplinks up
20:06:24.384[netCorrelator] 899434108469us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: UAT Servers. 2 uplinks up
20:06:24.384[netCorrelator] 899434108470us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Production servers. 2 uplinks up
20:06:24.384[netCorrelator] 899434108471us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: VM Network. 2 uplinks up
20:06:24.384[netCorrelator] 899434108472us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Management Network. 2 uplinks up
20:06:24.384[netCorrelator] 899434108478us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: UAT Servers. 2 uplinks up
20:06:24.384[netCorrelator] 899434108479us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Production servers. 2 uplinks up
20:06:24.384[netCorrelator] 899434108480us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: VM Network. 2 uplinks up
20:06:24.384[netCorrelator] 899434108481us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Management Network. 2 uplinks up
20:06:24.384[netCorrelator] 899434108486us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: UAT Servers. 2 uplinks up
20:06:24.384[netCorrelator] 899434108487us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Production servers. 2 uplinks up
20:06:24.384[netCorrelator] 899434108488us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: VM Network. 2 uplinks up
20:06:24.384[netCorrelator] 899434108489us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Management Network. 2 uplinks up
20:06:24.384[netCorrelator] 899434108494us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: UAT Servers. 2 uplinks up
20:06:24.384[netCorrelator] 899434108495us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Production servers. 2 uplinks up
20:06:24.384[netCorrelator] 899434108496us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: VM Network. 2 uplinks up
20:06:24.384[netCorrelator] 899434108497us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Management Network. 2 uplinks up
20:06:24.384[netCorrelator] 899434108502us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: UAT Servers. 2 uplinks up
20:06:24.384[netCorrelator] 899434108503us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Production servers. 2 uplinks up
20:06:24.384[netCorrelator] 899434108504us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: VM Network. 2 uplinks up
20:06:24.384[netCorrelator] 899434108505us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Management Network. 2 uplinks up
20:06:24.385info hostd[2099342] [Originator@6876 sub=Libs opID=ef8bde18] IOFilterInfoImpl: Inbox-IOFilter Id: VMW_spm_1.0.0, localId: spm
20:06:24.385info hostd[2099342] [Originator@6876 sub=Libs opID=ef8bde18] IOFilterInfoImpl: Inbox-IOFilter Id: VMW_vmwarevmcrypt_1.0.0, localId: vmwarevmcrypt
20:06:24.385[netCorrelator] 899434108510us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: UAT Servers. 2 uplinks up
20:06:24.385[netCorrelator] 899434108511us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Production servers. 2 uplinks up
20:06:24.385[netCorrelator] 899434108512us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: VM Network. 2 uplinks up
20:06:24.385[netCorrelator] 899434108512us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Management Network. 2 uplinks up
20:06:24.385[netCorrelator] 899434108517us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: UAT Servers. 2 uplinks up
20:06:24.385[netCorrelator] 899434108518us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Production servers. 2 uplinks up
20:06:24.385[netCorrelator] 899434108519us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: VM Network. 2 uplinks up
20:06:24.385[netCorrelator] 899434108520us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Management Network. 2 uplinks up
20:06:24.385[netCorrelator] 899434108525us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: UAT Servers. 2 uplinks up
20:06:24.387info hostd[2099342] [Originator@6876 sub=Libs opID=ef8bde18] PluginLdr_Load: Loaded plugin 'libvmiof-disk-spm.so' from '/usr/lib64/vmware/plugin/libvmiof-disk-spm.so'
20:06:24.388info hostd[2099342] [Originator@6876 sub=Libs opID=ef8bde18] PluginLdr_Load: Loaded plugin 'libvmiof-disk-vmwarevmcrypt.so' from '/usr/lib64/vmware/plugin/libvmiof-disk-vmwarevmcrypt.so'
20:06:25.345warning hostd[2098658] [Originator@6876 sub=VigorStatsProvider(0000009353bad2b0)] AddVirtualMachine: VM '64' already registered
20:06:25.345warning hostd[2098658] [Originator@6876 sub=VigorStatsProvider(0000009353bad2b0)] AddVirtualMachine: VM '79' already registered
20:06:27.002info hostd[2098657] [Originator@6876 sub=Hostsvc.VmkVprobSource] VmkVprobSource::Post event: (vim.event.EventEx) {
20:06:27.002[netCorrelator] 899462940618us: [esx.problem.net.vmnic.watchdog.reset] Uplink vmnic0 has recovered from a transient failure due to watchdog timeout
20:06:27.003info hostd[2098657] [Originator@6876 sub=Vimsvc.ha-eventmgr] Event 1355 : Uplink vmnic0 has recovered from a transient failure due to watchdog timeout
20:06:27.426cpu2:2097220)NetqueueBal: 5032: vmnic0: device Up notification, reset logical space needed
20:06:27.426cpu2:2097220)NetPort: 1580: disabled port 0x2000002
20:06:27.427cpu3:2212666)NetSched: 654: vmnic0-0-tx: worldID = 2212666 exits
20:06:27.427cpu2:2097220)Uplink: 11681: enabled port 0x2000002 with mac ec:f4:bb:c4:f9:1c
20:06:27.428cpu2:2097220)Uplink: 537: Driver claims supporting 0 RX queues, and 0 queues are accepted.
20:06:27.428cpu2:2097220)Uplink: 533: Driver claims supporting 0 TX queues, and 0 queues are accepted.
20:06:27.428cpu2:2097220)NetPort: 1580: disabled port 0x2000002
20:06:27.428cpu3:2224632)NetSched: 654: vmnic0-0-tx: worldID = 2224632 exits
20:06:27.428cpu2:2097220)Uplink: 11681: enabled port 0x2000002 with mac ec:f4:bb:c4:f9:1c
20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 1
20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 4
20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 9
20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 8
20:06:27.428cpu2:2097220)igbn: igbn_ChangeUplinkCap:1170: toggled hw VLAN offload off
20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 7
20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 3
20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 5
20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 10
20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 6
20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 11
20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 13
20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 1
20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 4
20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 9
20:06:27.428cpu2:2097220)igbn: igbn_ChangeUplinkCap:1179: toggled hw VLAN offload on
20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 8
20:06:27.428cpu2:2097220)igbn: igbn_ChangeUplinkCap:1170: toggled hw VLAN offload on
20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 7
20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 3
20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 5
20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 10
20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 6
20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 11
20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 13
20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 1
20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 4
20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 9
20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 8
20:06:27.428cpu2:2097220)igbn: igbn_ChangeUplinkCap:1170: toggled hw VLAN offload off
20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 7
20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 3
20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 5
20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 10
20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 6
20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 11
20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 13
20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 1
20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 4
20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 9
20:06:27.428cpu2:2097220)igbn: igbn_ChangeUplinkCap:1179: toggled hw VLAN offload on
20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 8
20:06:27.428cpu2:2097220)igbn: igbn_ChangeUplinkCap:1170: toggled hw VLAN offload on
20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 7
20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 3
20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 5
20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 10
20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 6
20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 11
20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 13
20:06:29.662info hostd[2099336] [Originator@6876 sub=Libs opID=HB-host-9@27611-9c06fe0-75-de22 user=vpxuser] NetstackInstanceImpl: congestion control algorithm: newreno
20:06:29.664warning hostd[2099336] [Originator@6876 sub=Hostsvc.Tpm20Provider opID=HB-host-9@27611-9c06fe0-75-de22 user=vpxuser] Unable to retrieve TPM/TXT status. TPM functionality will be unavailable. Failure reason: Unable to get node: Sys.
20:06:29.718info hostd[2099336] [Originator@6876 sub=Libs opID=HB-host-9@27611-9c06fe0-75-de22 user=vpxuser] Could not expand environment variable HOME.
20:06:29.723info hostd[2099336] [Originator@6876 sub=Libs opID=HB-host-9@27611-9c06fe0-75-de22 user=vpxuser] Could not expand environment variable HOME.
20:06:40.015warning hostd[2098841] [Originator@6876 sub=Statssvc] Calculated write I/O size 989341 for scsi0:5 is out of range -- 989341,prevBytes = 901524710400 curBytes =901563294720 prevCommands = 1217187curCommands = 1217226
20:06:40.195error hostd[2099330] [Originator@6876 sub=Hostsvc.NsxSpecTracker] Object not found/hostspec disabled
20:06:50.427cpu4:2097641)NMP: nmp_ResetDeviceLogThrottling:3569: Error status H:0x0 D:0x2 P:0x0 Sense Data: 0x5 0x24 0x0 from dev "mpx.vmhba32:C0:T0:L0" occurred 1 times(of1commands)
20:06:55Z sdInjector[2098648]: Injector: Sleeping!
20:07:10.197error hostd[2098845] [Originator@6876 sub=Hostsvc.NsxSpecTracker] Object not found/hostspec disabled
20:07:27Z sdInjector[2098648]: Injector: Sleeping!
20:07:31.676info hostd[2099342] [Originator@6876 sub=Libs opID=ef8bde70] NetstackInstanceImpl: congestion control algorithm: newreno
20:07:40.201error hostd[2099329] [Originator@6876 sub=Hostsvc.NsxSpecTracker] Object not found/hostspec disabled
20:07:59Z sdInjector[2098648]: Injector: Sleeping!

Re: Stopping I/O on vmnic0


Hello Diego,

 

Before we upgraded to ESXi 6.7 we checked the hardware compatibility list and also checked with the vendor (Dell). The vendor confirmed that the server supports ESXi 6.7, although it had not yet been added to the hardware compatibility list.

 

Also, none of the other servers are experiencing this issue.
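
To rule out the adapter itself, we still plan to compare the driver/firmware details and the error counters of vmnic0 on the affected host (a rough sketch):

  # Driver name, version and firmware of the affected uplink
  esxcli network nic get -n vmnic0
  # Error/drop counters, to see whether the hardware itself reports problems
  esxcli network nic stats get -n vmnic0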

 

Regards,

 

Martin

Re: Datastore NFS or iSCSI


> If I create it with btrfs, I cannot find the volume on my ESXi host after a rescan. Do you know why?
Please explain!
If you use NFS, the NFS server handles the filesystem, and ESXi should not notice any difference between ext and btrfs.
ESXi itself cannot read ext3, ext4 or btrfs at all.
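
You can see this on the host itself; esxcli only lists the filesystems ESXi actually mounts (VMFS, NFS, vfat and so on), never the NAS-internal ext4/btrfs layout:

  # Filesystems mounted by the host; the NAS's ext4/btrfs volumes never appear here
  esxcli storage filesystem list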

Re: Creating Host-only network in esxi 5.5


You can use ESXi to create something that is close to the host-only network you know from Workstation,
BUT you would need to create a VM that acts as the DHCP server yourself.
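
As a rough sketch (the vSwitch and port group names are just examples), the uplink-less vSwitch part can be done with esxcli; the DHCP part still has to come from a VM you attach to that port group:

  # Standard vSwitch with no physical uplinks, plus a port group for the VMs
  esxcli network vswitch standard add --vswitch-name=vSwitch-HostOnly
  esxcli network vswitch standard portgroup add --portgroup-name=HostOnly-PG --vswitch-name=vSwitch-HostOnly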
