Thursday, May 2, 2019

Upgrading ESXi 5.x to 6.0 from the Command Line

Since many people are starting to upgrade their ESXi software to the latest version, it helps to understand the steps involved. In this tutorial I will explain how to upgrade ESXi 5.1 to ESXi 6.0 in five simple steps.

Before you begin:
Before making any changes to a production VMware vSphere Hypervisor ESXi 5.1 host server that is running virtual machines, it is important to have a valid backup of the virtual machines. In addition, all virtual machines must be powered off and the server should be in maintenance mode.
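The preparation above can be sketched as a short command sequence. This is a dry-run sketch, not a verified procedure: `vim-cmd` and `esxcli` exist only on the ESXi shell, so a `run` stub prints each command instead of executing it, and the VM ID `1` is a placeholder taken from `vim-cmd vmsvc/getallvms` output.

```shell
# Dry-run stub: prints each command instead of executing it.
# On a real ESXi host, change this to: run() { "$@"; }
run() { echo "$@"; }

out=$(
  run vim-cmd vmsvc/getallvms                         # list VM IDs
  run vim-cmd vmsvc/power.shutdown 1                  # guest shutdown, one call per VM ID
  run esxcli system maintenanceMode set --enable true # enter maintenance mode
)
printf '%s\n' "$out"
```

Shutting guests down via `power.shutdown` (rather than `power.off`) gives the guest OS a chance to stop cleanly before the host enters maintenance mode.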

Here are the steps to upgrade the software:

1. Download the ESXi 6.0 Offline Bundle from VMware - here
2. Upload the offline bundle to an ESXi 5.1 datastore

3. Connect to the ESXi 5.1 host server via SSH
  • At the console prompt, type vmware -l to confirm the ESXi version. It should return VMware ESXi 5.1.0 GA or later, depending on whether any patches have been applied.
 
4. Use esxcli on the console command line to upgrade the server
  • In the console, type the following:

esxcli software vib update -d /vmfs/volumes/datastore1/VMware-ESXi-6.0.0-2494585-depot.zip

  • If entered correctly, the command above gives no immediate feedback. The upgrade should complete after about 20 seconds, although we have found that on slow USB flash drives and SD cards it can take considerably longer to write the files. When it finishes, the update result is printed to the console.
  • If all went well you should see the update result, though you may need to scroll the screen to see all of the information. Check the result at the top: if the update completed successfully, the system must be rebooted for the changes to take effect.
  • You can see which new VIB packages (driver bundles) were installed, removed, or skipped. If you are running an OEM build of VMware ESXi 5.1 (e.g. from HP, IBM, or Dell), the output may differ.
  • Type "reboot" to restart the server.
 
5. Check that the ESXi 5.1 host server has been upgraded
  • Check the following:
    • The console screen should report VMware ESXi 6.0.0 (VMKernel Release Build 2809209)
Congratulations, you have successfully upgraded your host server from VMware ESXi 5.1 to VMware ESXi 6.0.
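Step 5 can also be scripted: `vmware -l` prints the version string, so a quick post-reboot check might look like the following sketch. The sample string stands in for real output; on the host you would capture it with `ver=$(vmware -l)`.

```shell
# Sample output of `vmware -l` after a successful upgrade (illustrative);
# on the host use:  ver=$(vmware -l)
ver='VMware ESXi 6.0.0 GA'

case "$ver" in
  'VMware ESXi 6.0.0'*) result=upgraded ;;
  *)                    result="unexpected version: $ver" ;;
esac
echo "$result"
```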

Basic VMware article series: this series covers VMware vSphere 5.5 and VMware vSphere Hypervisor ESXi 5.5, and the articles also apply to VMware vSphere Hypervisor ESXi 5.0 and 5.5. For consistency, I use VMware vSphere Hypervisor ESXi 5.1 throughout the series.



Lab environment: ESXi 5.1 ---> 6.5

Upgrade path: ESXi 5.1 ---> ESXi 6.0 ---> ESXi 6.5

Upgrade commands:

  # 5.1|5.5 --> 6.0
  esxcli software sources profile list --depot=/vmfs/volumes/5384d7fc-83b4c32c-e5a5-c81f66bca93b/ESXi600-201706001.zip
  esxcli software profile update -d /vmfs/volumes/5384d7fc-83b4c32c-e5a5-c81f66bca93b/ESXi600-201706001.zip -p ESXi-6.0.0-20170604001-standard
  # 6.0 --> 6.5
  esxcli software sources profile list --depot=/vmfs/volumes/5384d7fc-83b4c32c-e5a5-c81f66bca93b/update-from-esxi6.5-6.5_update02.zip
  esxcli software profile update -d /vmfs/volumes/5384d7fc-83b4c32c-e5a5-c81f66bca93b/update-from-esxi6.5-6.5_update02.zip -p ESXi-6.5.0-20180502001-standard
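The `-p` argument to `esxcli software profile update` has to match a profile name reported by `esxcli software sources profile list`. Rather than retyping it, the `-standard` profile can be pulled out of the listing with awk. The listing below is an illustrative sample of the command's tabular output, not captured from a real host:

```shell
# Sample `esxcli software sources profile list --depot=...` output (illustrative)
listing='Name                             Vendor         Acceptance Level
-------------------------------  -------------  ----------------
ESXi-6.0.0-20170604001-standard  VMware, Inc.   PartnerSupported
ESXi-6.0.0-20170604001-no-tools  VMware, Inc.   PartnerSupported'

# Pick the profile whose name ends in -standard
profile=$(printf '%s\n' "$listing" | awk '$1 ~ /-standard$/ {print $1}')
echo "$profile"
# Then: esxcli software profile update -d <depot.zip> -p "$profile"
```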

Problems encountered during the upgrade:

1. Compatibility problems:

  [root@localhost:/vmfs/volumes/5384d7fc-83b4c32c-e5a5-c81f66bca93b] esxcli software profile update -d /vmfs/volumes/5384d7fc-83b4c32c-e5a5-c81f66bca93b/ESXi650-201701001.zip -p ESXi-6.5.0-20170104001-standard
  [DependencyError]
  VIB QLogic_bootbank_scsi-qla4xxx_634.5.7.0-1OEM.500.0.0.472560 requires com.vmware.driverAPI-9.2.0.0, but the requirement cannot be satisfied within the ImageProfile.
  VIB QLogic_bootbank_scsi-qla4xxx_634.5.7.0-1OEM.500.0.0.472560 requires vmkapi_2_0_0_0, but the requirement cannot be satisfied within the ImageProfile.
  VIB Emulex_bootbank_scsi-be2iscsi_4.2.324.12-1OEM.500.0.0.472629 requires vmkapi_2_0_0_0, but the requirement cannot be satisfied within the ImageProfile.
  VIB QLogic_bootbank_net-qlcnic_5.0.750-1OEM.500.0.0.472560 requires com.vmware.driverAPI-9.2.0.0, but the requirement cannot be satisfied within the ImageProfile.
  VIB Brocade_bootbank_net-bna_3.2.0.0-1OEM.500.0.0.472560 requires vmkapi_2_0_0_0, but the requirement cannot be satisfied within the ImageProfile.
  VIB Emulex_bootbank_scsi-be2iscsi_4.2.324.12-1OEM.500.0.0.472629 requires com.vmware.iscsi_linux-9.2.0.0, but the requirement cannot be satisfied within the ImageProfile.
  VIB Emulex_bootbank_scsi-be2iscsi_4.2.324.12-1OEM.500.0.0.472629 requires com.vmware.driverAPI-9.2.0.0, but the requirement cannot be satisfied within the ImageProfile.
  VIB Brocade_bootbank_net-bna_3.2.0.0-1OEM.500.0.0.472560 requires com.vmware.driverAPI-9.2.0.0, but the requirement cannot be satisfied within the ImageProfile.
  VIB VMware_bootbank_net-qlge_2.0.0.54-1vmw.500.0.0.472560 requires vmkapi_2_0_0_0, but the requirement cannot be satisfied within the ImageProfile.
  VIB Intel_bootbank_net-ixgbe_3.11.32-1OEM.500.0.0.472560 requires com.vmware.driverAPI-9.2.0.0, but the requirement cannot be satisfied within the ImageProfile.
  VIB Intel_bootbank_net-ixgbe_3.11.32-1OEM.500.0.0.472560 requires vmkapi_2_0_0_0, but the requirement cannot be satisfied within the ImageProfile.
  VIB QLogic_bootbank_net-qlcnic_5.0.750-1OEM.500.0.0.472560 requires vmkapi_2_0_0_0, but the requirement cannot be satisfied within the ImageProfile.
  VIB VMware_bootbank_net-qlge_2.0.0.54-1vmw.500.0.0.472560 requires com.vmware.driverAPI-9.2.0.0, but the requirement cannot be satisfied within the ImageProfile.
  VIB Brocade_bootbank_scsi-bfa_3.2.0.0-1OEM.500.0.0.472560 requires com.vmware.driverAPI-9.2.0.0, but the requirement cannot be satisfied within the ImageProfile.
  VIB QLogic_bootbank_scsi-qla4xxx_634.5.7.0-1OEM.500.0.0.472560 requires com.vmware.iscsi_linux-9.2.0.0, but the requirement cannot be satisfied within the ImageProfile.
  VIB VMware_bootbank_ehci-ehci-hcd_1.0-4vmw.600.3.69.5572656 requires com.vmware.usb-9.2.3.0, but the requirement cannot be satisfied within the ImageProfile.
  VIB Brocade_bootbank_scsi-bfa_3.2.0.0-1OEM.500.0.0.472560 requires vmkapi_2_0_0_0, but the requirement cannot be satisfied within the ImageProfile.
  Please refer to the log file for more details.

  The solution came from here: https://communities.vmware.com/message/2734060

  # List the installed VIB packages
  esxcli software vib list
  # From the errors above, the following packages are blocking the upgrade and must be removed
  net-bna 3.2.0.0-1OEM.500.0.0.472560 Brocade VMwareCertified 2014-05-27
  scsi-bfa 3.2.0.0-1OEM.500.0.0.472560 Brocade VMwareCertified 2014-05-27
  dell-configuration-vib 5.1-0A02 Dell PartnerSupported 2014-05-27
  ima-be2iscsi 4.2.324.12-1OEM.500.0.0.472629 Emulex VMwareCertified 2014-05-27
  scsi-be2iscsi 4.2.324.12-1OEM.500.0.0.472629 Emulex VMwareCertified 2014-05-27
  net-ixgbe 3.11.32-1OEM.500.0.0.472560 Intel VMwareCertified 2014-05-27
  ima-qla4xxx 500.2.01.31-1vmw.0.0.060523 QLogic VMwareCertified 2014-05-27
  net-qlcnic 5.0.750-1OEM.500.0.0.472560 QLogic VMwareCertified 2014-05-27
  scsi-qla4xxx 634.5.7.0-1OEM.500.0.0.472560 QLogic VMwareCertified 2014-05-27
  net-qlge 2.0.0.54-1vmw.500.0.0.472560 VMware VMwareCertified 2014-05-27
  # Command to remove a package
  esxcli software vib remove -n <vibname>
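Removing the packages one by one is tedious, so the ten conflicting VIBs listed above can be removed in a loop. This is a dry-run sketch: the `run` stub only prints each `esxcli` command so the loop can be shown (and checked) outside an ESXi host; switch the stub to execute for real.

```shell
# Dry-run stub; on a real ESXi host use: run() { "$@"; }
run() { echo "$@"; }

# The conflicting VIB names from the DependencyError output above
vibs='net-bna scsi-bfa dell-configuration-vib ima-be2iscsi scsi-be2iscsi
net-ixgbe ima-qla4xxx net-qlcnic scsi-qla4xxx net-qlge'

out=$(for v in $vibs; do run esxcli software vib remove -n "$v"; done)
printf '%s\n' "$out"
```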

  Then simply re-run the upgrade command. Each update requires a reboot of the server to take effect.


I. VMware vSphere Deployment

  http://herunmin.blog.51cto.com/5586997/1206554 Installing Mac OSX Lion 10.7 on ESXi 5.1

II. Troubleshooting:

  1. vSphere Client unable to connect to vSphere server - a resolved case

    Reference: http://bbs.51cto.com/thread-1108272-1.html

   

  First, test from the network side:

    ping IP   # ping succeeds, so the network link is fine

  Then check the ports: VMware ESXi uses ports 443 and 902 by default

    Port 443 mainly carries management messages, while port 902 carries the remote console display

    telnet IP 902 # telnet succeeds, so this port is fine

    telnet IP 443 # telnet fails, so something is wrong with this port; check two things: the firewall/port mapping, and whether the server is actually listening on the port

  Log in to the server and run: nc -v -z 127.0.0.1 443 #nc: connect to 127.0.0.1 port 443 (tcp) failed: Connection refused

      This shows that port 443 is not being listened on at the server, so the next task is to get it listening again.
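When neither telnet nor nc is at hand, bash can probe a TCP port by itself through its /dev/tcp pseudo-device. A small sketch (the /dev/tcp redirection is bash-specific; under a strict POSIX shell the redirection simply fails, which also reads as "closed"):

```shell
# Probe host:port; prints "open" or "closed". Relies on bash's /dev/tcp.
# The fd opened inside the subshell is closed automatically on exit.
port_open() {
  ( exec 3<>"/dev/tcp/$1/$2" ) 2>/dev/null && echo open || echo closed
}

port_open 127.0.0.1 1   # port 1 is almost never listening
```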

  The answer turned up here: https://communities.vmware.com/message/2160793#2160793

    The service responsible for listening on the port is /etc/init.d/rhttpproxy. Opening that script reveals the command /sbin/watchdog.sh -d -s $RHTTPPROXY_TAG rhttpproxy ++min=0,swapscope=system,group=hostd -r /etc/vmware/rhttpproxy/config.xml, which points at a configuration file, so the next step is to inspect that file.

    The /etc/vmware/rhttpproxy/config.xml file records that https listens on 443. Try restarting the service: /etc/init.d/rhttpproxy restart

  

  <!-- RhttpProxy configuration file for ESX/ESXi -->
  <config>
    <!-- the version of this config file -->
    <version>5.5.0.0</version>

    <!-- working directory -->
    <workingDir>/var/log/vmware/</workingDir>

    <!-- location to examine for configuration files that are needed -->
    <defaultConfigPath> /etc/vmware/ </defaultConfigPath>

    <log>
      <!-- controls where rolling log files are stored -->
      <directory>/var/log/vmware/</directory>

      <!-- name of log file -->
      <name>rhttpproxy</name>

      <!-- controls whether logger sends its output to console also -->
      <outputToConsole>false</outputToConsole>

      <!-- If true, log to files on disk -->
      <outputToFiles>false</outputToFiles>

      <!-- default size(in bytes) of each log file before rolling over to next -->
      <maxFileSize>524288</maxFileSize>

      <!-- default number of log files to rotate amongst -->
      <maxFileNum>8</maxFileNum>

      <!-- default log level -->
      <level>verbose</level>

      <!-- If true, logs to syslog -->
      <outputToSyslog>true</outputToSyslog>

      <!-- syslog configuration. Only used if outputToSyslog is true. -->
      <syslog>
        <!-- syslog identifier to use when logging -->
        <ident>Rhttpproxy</ident>

        <!-- syslog facility to use when logging -->
        <facility>local4</facility>

        <!-- The section header contents are placed in this file at startup.
             When vmsyslogd rotates the hostd log file, it logs the content of this
             file to syslog. This is being done so that we don't lose context on log
             rotations.
             IMPORTANT: Value needs to match that in onrotate entry in
             vmsyslog.d/hostd.conf
        -->
        <logHeaderFile>/var/run/vmware/rhttpproxyLogHeader.txt</logHeaderFile>
      </syslog>
    </log>

    <proxy>
      <!-- default location of the proxy config file -->
      <endpoints>/etc/vmware/rhttpproxy/endpoints.conf</endpoints>

      <!-- HTTP port to be used by the reverse proxy -->
      <httpPort>80</httpPort>

      <!-- HTTPS port to be used by the reverse proxy -->
      <httpsPort>443</httpsPort>
    </proxy>

    <!-- Remove the following node to disable SSL -->
    <ssl>
      <!-- The server private key file -->
      <privateKey>/etc/vmware/ssl/rui.key</privateKey>

      <!-- The server side certificate file -->
      <certificate>/etc/vmware/ssl/rui.crt</certificate>
    </ssl>

    <vmacore>
      <pluginBaseDir>/lib/</pluginBaseDir>
      <!-- default thread pool configuration for Posix impl -->
      <threadPool>
        <IoMin>2</IoMin>
        <IoMax>44</IoMax>
        <TaskMin>2</TaskMin>
        <TaskMax>18</TaskMax>
        <!-- Do not set MaxFdsPerThread if hostdMinFds is set above -->
        <!-- MaxFdsPerThread> 2048 </MaxFdsPerThread -->
        <NumKeepAlive>8</NumKeepAlive>
        <ThreadCheckTimeSecs>600</ThreadCheckTimeSecs>
        <ThreadStackSizeKb>256</ThreadStackSizeKb>
        <threadNamePrefix>rhttpproxy</threadNamePrefix>
      </threadPool>

      <rootPasswdExpiration>false</rootPasswdExpiration>

      <ssl>
        <doVersionCheck> false </doVersionCheck>
        <useCompression>true</useCompression>
        <libraryPath>/lib/</libraryPath>
      </ssl>

      <!-- enable plugin loading -->
      <loadPlugins> false </loadPlugins>

      <!-- enable/disable the dynamic loading of plugins -->
      <loadDynamicPlugins> false </loadDynamicPlugins>

      <!-- should the RefTracker be enabled? -->
      <!-- <useRefTracker>false</useRefTracker> -->

      <!-- Enable/disable the stack tracer -->
      <!-- <useStackTracer>false</useStackTracer> -->

      <xml>
        <doc>
          <!-- maximum size of each XML message. -->
          <maxChars>62914560</maxChars>
          <!-- maximum nodes in of each XML message. -->
          <maxNodes>524288</maxNodes>
          <!-- maximum node depth of each XML message. -->
          <maxDepth>1000</maxDepth>
        </doc>
      </xml>

      <http>
        <!-- Num of max proxy connections -->
        <!-- PR 604415: Temporary lower the connections limit to 128 -->
        <maxConnections> 128 </maxConnections>
      </http>
    </vmacore>
  </config>

    nc -v -z 127.0.0.1 443   #Connection to 127.0.0.1 443 port [tcp/https] succeeded!

  Tested the client connection again. It works.

  Summary: this problem took a whole afternoon to solve. The lesson is that I did not know this virtualization platform's command-line tools well enough. Fortunately a Huawei engineer suggested the nc command, which pinned the fault down to port 443 not listening. My thanks to them.

III. Common command reference:

  1. 25 essential commands for ESX and ESXi administrators http://www.cnblogs.com/frostx/p/3705942.html

  2. nc command options:

    nc [-hlnruz][-g<gateway...>][-G<hop pointer>][-i<delay seconds>][-o<output file>][-p<port>][-s<source address>][-v...][-w<timeout seconds>][hostname][port...]

  • -g<gateway> set loose source-routing hop gateways; up to 8 may be specified.
  • -G<hop pointer> set the source-routing hop pointer; the value must be a multiple of 4.
  • -h show online help.
  • -i<delay seconds> set a delay interval for lines sent and ports scanned.
  • -l listen mode, for inbound connections.
  • -n use numeric IP addresses only, with no DNS lookups.
  • -o<output file> dump the traffic in both directions to the given file as hex.
  • -p<port> set the local port to use.
  • -r randomize the local and remote ports.
  • -s<source address> set the source IP address of outgoing packets.
  • -u use UDP instead of TCP.
  • -v show the command's progress verbosely.
  • -w<timeout seconds> set the timeout for connections.
  • -z zero-I/O mode, used only when scanning ports.
    Examples:
      nc -v -z -w2 192.168.0.1 1-1000 # TCP scan of ports 1-1000 on 192.168.0.1 (add -u to scan UDP instead)
      nc -nvv 192.168.0.1 80 # probe port 80

3. Viewing route information:

  • esxcfg-route -l

    VMkernel Routes:
    Network       Netmask        Gateway       Interface
    10.124.177.0  255.255.255.0  Local Subnet  vmk0
    default       0.0.0.0        10.124.177.1  vmk0

  • esxcli network ip route ipv4 list

    Network       Netmask        Gateway       Interface  Source
    ------------  -------------  ------------  ---------  ------
    default       0.0.0.0        10.124.177.1  vmk0       MANUAL
    10.124.177.0  255.255.255.0  0.0.0.0       vmk0       MANUAL
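Output like the above can be post-processed on the ESXi shell. For example, the default gateway can be extracted with awk; the variable below reuses the sample table rather than live command output:

```shell
# Sample `esxcli network ip route ipv4 list` output (from the table above)
routes='Network       Netmask        Gateway       Interface  Source
------------  -------------  ------------  ---------  ------
default       0.0.0.0        10.124.177.1  vmk0       MANUAL
10.124.177.0  255.255.255.0  0.0.0.0       vmk0       MANUAL'

# On a live host: gw=$(esxcli network ip route ipv4 list | awk '$1=="default"{print $3}')
gw=$(printf '%s\n' "$routes" | awk '$1 == "default" {print $3}')
echo "$gw"
```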

  

  4. Checking hostd status

  /etc/init.d/hostd status

    hostd is running.

  /etc/init.d/hostd {start|stop|restart|status}
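The status output can drive a simple health check. This sketch assumes `/etc/init.d/hostd status` prints "hostd is running." when healthy; the helper takes the status text as an argument so it can be exercised with sample strings anywhere:

```shell
# Decide whether hostd needs a restart, given the output of
# `/etc/init.d/hostd status` (passed in as $1 so this is testable off-host).
hostd_action() {
  case "$1" in
    *'is running'*) echo "ok" ;;
    *)              echo "restart" ;;  # then run: /etc/init.d/hostd restart
  esac
}

hostd_action 'hostd is running.'
hostd_action 'hostd is not running.'
```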
