Any deployment should consider not only the immediate needs, but also potential future requirements. How many times have we found ourselves in an "I wish I had known this earlier" or "I didn't realise it back then" situation? We can plan as much as we like, but sometimes the most minute details escape us.
IP address assignment for a Nutanix HCI cluster can be simple enough for immediate needs, but it is also important enough to plan for the future. Let's start by laying out a few requirements.
Basic IP Address Requirements
Each Nutanix HCI node has three key elements: the hypervisor host, the Nutanix CVM, and the IPMI. In a standard implementation, each element needs only one IP address. A reminder at this point: keep things simple. Although the CVM supports Network Segmentation to break out different traffic to different subnets, it does add complexity to manage.
At the cluster level, there are also a few virtual IPs that can be configured.
- AOS Cluster Virtual IP – this is mandatory only for a Hyper-V cluster, but definitely recommended for all implementations. It serves as a single IP to remember when connecting to Prism, whether for an administrator or for automation purposes. This IP floats among all the CVMs in a cluster.
- External Data Services IP – this is optional, but highly recommended. It is the VIP for the iSCSI service used to access Volume Groups; on a client, you would configure this IP as the iSCSI target. If you use a backup solution that integrates with Prism, this IP will likely be needed as well. This IP also floats among all the CVMs in a cluster.
- Hyper-V Failover Cluster IP – as the name suggests, this is required only for Hyper-V implementations. It is the floating IP address that belongs to the Failover Cluster created for the Hyper-V hosts. To be clear, AHV and ESXi implementations do not need this IP.
So far, that is the list of IP addresses needed to stand up a cluster. Here is a quick summary of the number of IP addresses needed per cluster. Do read on to understand which IP addresses must come from the same subnet.
| Scope | IP count | Addresses |
|---|---|---|
| Per node | 3 IPs | Host, CVM, IPMI |
| Per cluster | Up to 3 IPs | Cluster Virtual IP, External Data Services IP, Hyper-V Failover Cluster IP |
| e.g. 6-node AHV or ESXi cluster | 20 IPs total | 18 for nodes + 2 for cluster |
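The arithmetic in the table above can be sketched as a small helper. This is purely illustrative; the function name and parameters are my own, and the counts follow the per-node and per-cluster breakdown described earlier.

```python
# Hypothetical helper to tally the IP addresses needed for one cluster,
# following the table above: 3 per node plus the cluster-level VIPs.

def cluster_ip_count(nodes, hypervisor="AHV", data_services=True):
    """Return the total IP addresses needed for a cluster."""
    per_node = 3 * nodes          # Host + CVM + IPMI for each node
    cluster = 1                   # AOS Cluster Virtual IP (recommended for all)
    if data_services:
        cluster += 1              # External Data Services IP (iSCSI VIP)
    if hypervisor == "Hyper-V":
        cluster += 1              # Hyper-V Failover Cluster IP
    return per_node + cluster

print(cluster_ip_count(6, "AHV"))  # the 6-node AHV/ESXi example: 20
```

A 6-node Hyper-V cluster would need one more, 21, because of the Failover Cluster IP.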
The table above is only about getting the right number of IP addresses; however, there is a further requirement that Hosts and CVMs must be on the same subnet. The following list explains in more detail.
- Host – this is the "management" IP of the hypervisor. It is the IP address assigned to br0 on AHV, vmk0 on ESXi, and the ExternalSwitch on Hyper-V. It must be on the same subnet (and VLAN) as the CVM, and all hosts in the same cluster must also be on the same subnet. For this post, let's call this the HOST-CVM subnet.
- CVM – this is the "external" IP of the CVM. It is the IP address assigned to eth0 on the CVM. In a standard deployment, this is the interface the CVM uses for just about everything: Prism access, intra-cluster storage replication, cross-cluster replication, iSCSI access, notifications, and so on. As mentioned above, this IP must belong to the HOST-CVM subnet, and all CVMs in the same cluster must belong to the same subnet.
- Cluster Virtual IP – this is the "cluster VIP" for the CVM services. While it is possible to access Prism via any CVM's IP address, that CVM is not always available: CVMs can be taken offline during upgrades, and during those times the corresponding CVM IP will not respond. It is far more convenient to have a VIP that floats from CVM to CVM as needed. Since the VIP is used by the CVMs, it must also be in the HOST-CVM subnet.
- External Data Services IP – again, being the iSCSI target IP for the cluster, and able to float between CVMs, it too has to be on the HOST-CVM subnet.
- Hyper-V Failover Cluster IP – this IP, being the VIP for Hyper-V hosts, will also have to be on the same HOST-CVM subnet.
- IPMI – that is what it is called on Nutanix NX nodes; on HPE DX it is iLO, on Dell EMC XC it is iDRAC, and on Lenovo HX it is XClarity. The use of IPMI is optional, as nothing on the cluster depends on its external connectivity. However, I do recommend having it hooked up, mostly for the administrator's convenience: at the very least it allows remote power actions and physical host console access. The IPMI does not have to be on the HOST-CVM subnet; my recommendation is to keep it on a different subnet, and even better on a disparate network. If the primary network is lost, there should at least still be some connectivity via IPMI to check on the hosts and troubleshoot. If there are constraints, and the drawbacks are understood, it is possible to have IPMI on the HOST-CVM subnet as well. Even if IPMI is not going to be connected, consider assigning IP addresses anyway. It future-proofs the deployment: if the policy changes and IPMI is connected later, you save the effort of configuring the IPs manually, and if there is ever a need to reach the IPMI with a laptop and a crossover cable, there is already a defined IP address to use.
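The "same HOST-CVM subnet" rules above lend themselves to a quick pre-deployment sanity check. Here is a minimal sketch using Python's standard `ipaddress` module; the subnet and addresses are made-up examples, not recommendations.

```python
import ipaddress

# Hypothetical HOST-CVM subnet and planned addresses for a small cluster.
# Host IPs, CVM IPs, and both VIPs must all fall inside this one subnet.
host_cvm_subnet = ipaddress.ip_network("10.10.20.0/24")

planned = {
    "host-1": "10.10.20.11",
    "cvm-1": "10.10.20.21",
    "cluster-vip": "10.10.20.100",
    "data-services-vip": "10.10.20.101",
}

for name, ip in planned.items():
    inside = ipaddress.ip_address(ip) in host_cvm_subnet
    print(f"{name}: {ip} -> {'OK' if inside else 'OUTSIDE HOST-CVM subnet!'}")
```

The IPMI addresses are deliberately left out of the check: per the list above, they are better placed on a separate subnet anyway.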
What Other IP Addresses to Cater For?
For those of you familiar with vSphere, you have probably already called out that we should also allocate IP addresses for the vMotion vmkernel port. You are right, and I am adding that to the list now. It is common practice to place vMotion traffic for vSphere on an isolated subnet, for security and for traffic separation.
So, at this point, if you are deploying vSphere, let’s add one more IP range for vMotion, and we will need one IP per host.
Switching focus to AOS, let me introduce Network Segmentation. This feature allows us to split certain traffic types on AOS out to a different virtual NIC. As of this post, with AOS 5.18, there are three possible options.
| Traffic type | Purpose |
|---|---|
| Backplane Network | Purely for intra-cluster communications, primarily to place node-to-node storage replication on a different vNIC. Traffic on this network does not require routing. |
| iSCSI | The goal here is to make iSCSI storage available on a different subnet. For best performance, it is generally recommended to put iSCSI clients on the same subnet as the storage, to avoid any IP routing. By allowing the CVMs to serve iSCSI traffic on a different subnet, more clients gain the option of accessing the storage on the same subnet. |
| Cluster Replication | Specifically for cluster-to-cluster storage replication. This is particularly useful if the CVMs need to be on a specific network catered for heavy cross-site replication. |
For each segmented traffic type, an additional vNIC is added to each CVM, and each vNIC must be on a different subnet. Hence, these three traffic types can introduce up to another three subnets per cluster. As you can imagine, if your environment requires any of these added segmentations, that is another range of IP addresses to manage.
To keep things simple, the cluster self-manages the IP assignment; in fact, you do not have a choice. In Prism, you provide the subnet details for the traffic type and the pool of IPs the cluster can use, and the system automatically distributes the IP addresses to each CVM. This is not done via DHCP; the addresses are system-assigned but static in nature. There is no need to take note of which CVM is assigned which IP, as you should not need to care. The only exception is the VIP: the iSCSI and Cluster Replication traffic types also need a VIP, and you can specify which IP is used for it.
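To make the pool-based assignment concrete, here is a rough illustration of the idea: one address reserved as the VIP you specify, and the rest statically handed out to the CVMs. All names and addresses are hypothetical, and in reality AOS performs this distribution, not you.

```python
import ipaddress

# Illustrative pool for one segmented network (e.g. the iSCSI segment).
# A /28 yields 14 usable host addresses.
pool = [str(ip) for ip in ipaddress.ip_network("192.168.50.0/28").hosts()]

cvms = ["cvm-1", "cvm-2", "cvm-3", "cvm-4"]

vip = pool[0]                             # the VIP you specify in Prism
assignments = dict(zip(cvms, pool[1:]))   # static, system-assigned per CVM

print("VIP:", vip)
for cvm, ip in assignments.items():
    print(cvm, "->", ip)
```

The point of the sketch is the sizing: the pool must hold one address per CVM plus the VIP, with spare addresses if the cluster may grow.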
The Other Important Bit: Going Beyond Initial Deployment
It may be dawning on you that the initial IP address assignments are just a start. You must also plan for future cluster expansion, especially if there is a possibility of adding more Nutanix nodes to the cluster.
It should be clear by now that all nodes, current and future, must have IP addresses from the same subnet. Hence, picking a subnet with enough spare IP addresses today, and reserving them for future use, is critical. Do not rely on being able to change the cluster's IPs later. While there is a supported procedure to change IP addresses, changing subnets requires a full cluster shutdown. It is a non-trivial operation.
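The sizing exercise described above can be sketched as a rough capacity check. The function and its defaults are my own assumptions: two IPs per node land on the HOST-CVM subnet (Host and CVM, with IPMI assumed elsewhere), plus the cluster VIPs and a buffer for gateways or other infrastructure already using addresses in that subnet.

```python
import ipaddress

def subnet_can_fit(cidr, max_nodes, cluster_vips=2, reserved=10):
    """Rough check: can this subnet hold the cluster at its target size?

    reserved covers gateway/infrastructure addresses already in use;
    adjust it to match your environment.
    """
    # Usable hosts = all addresses minus network and broadcast addresses.
    usable = ipaddress.ip_network(cidr).num_addresses - 2
    needed = 2 * max_nodes + cluster_vips + reserved  # Host + CVM per node
    return usable >= needed

print(subnet_can_fit("10.10.20.0/24", max_nodes=32))  # plenty of room
print(subnet_can_fit("10.10.20.0/26", max_nodes=32))  # too tight
```

Running a check like this against the maximum node count you might ever grow to, rather than today's node count, is exactly the reservation discipline the paragraph above argues for.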
The other alternative is to create a brand new cluster, on a different subnet. There’s no restriction on that.
While assigning IP addresses may seem a simple activity, you can see there are many areas to consider:
- Number of Nodes in a Cluster
- Is the cluster providing iSCSI services?
- Hypervisor level needs – e.g. Failover Cluster IP, vMotion IP
- Any requirements for Network Segmentation?
- Maximum number of nodes you target to grow the cluster to
It's important to reserve IP addresses for both immediate and future use.