According to the documentation, four or more network interfaces are recommended for each cluster node. However, the design of some servers, blade servers for example, does not always allow installing additional interfaces. This article looks at some specifics of configuring Sun Cluster on servers with only two network interfaces.
So, after installing Solaris and Sun Cluster we have the following configuration:
root@node1 #
root@node1 # dladm show-dev 
e1000g0         link: up        speed: 1000  Mbps       duplex: full 
e1000g1         link: unknown   speed: 0     Mbps       duplex: half 
root@node1 # dladm show-link 
e1000g0         type: non-vlan  mtu: 1500       device: e1000g0 
e1000g1         type: non-vlan  mtu: 1500       device: e1000g1 
root@node1 # ifconfig -a 
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1 
        inet 127.0.0.1 netmask ff000000  
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2 
        inet 192.168.56.11 netmask ffffff00 broadcast 192.168.56.255 
        ether 8:0:27:0:49:75  
root@node1 #  
We run scinstall, and when trying to configure the interconnect we get:
  >>> Cluster Transport Adapters and Cables <<< 
 
    You must configure the cluster transport adapters for each node in the 
    cluster. These are the adapters which attach to the private cluster  
    interconnect. 
 
    Select the first cluster transport adapter: 
 
        1) e1000g1 
        2) Other 
 
    Option:  2 
 
    What is the name of the first cluster transport adapter (help)?  e1000g0 
 
Adapter "e1000g0" is already in use as a public network adapter.
To solve this problem, we reconfigure the network interfaces:
root@node1 # ifconfig e1000g0 down unplumb 
root@node1 # ifconfig e1000g10000 plumb 
root@node1 # ifconfig e1000g10000 192.168.56.11/24 up 
root@node1 # ifconfig e1000g11000 plumb 
root@node1 # ifconfig e1000g11000 192.168.57.11/24 up 
root@node1 # ifconfig e1000g12001 plumb 
root@node1 # ifconfig e1000g12001 192.168.58.11/24 up 
root@node1 #  
root@node1 # dladm show-dev 
e1000g0         link: up        speed: 1000  Mbps       duplex: full 
e1000g1         link: up        speed: 1000  Mbps       duplex: full 
root@node1 # dladm show-link 
e1000g0         type: non-vlan  mtu: 1500       device: e1000g0 
e1000g10000     type: vlan 10   mtu: 1500       device: e1000g0 
e1000g11000     type: vlan 11   mtu: 1500       device: e1000g0 
e1000g1         type: non-vlan  mtu: 1500       device: e1000g1 
e1000g12001     type: vlan 12   mtu: 1500       device: e1000g1 
root@node1 # ifconfig -a 
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1 
        inet 127.0.0.1 netmask ff000000  
e1000g10000: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 3 
        inet 192.168.56.11 netmask ffffff00 broadcast 192.168.56.255 
        ether 8:0:27:0:49:75  
e1000g11000: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 4 
        inet 192.168.57.11 netmask ffffff00 broadcast 192.168.57.255 
        ether 8:0:27:0:49:75  
e1000g12001: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 5 
        inet 192.168.58.11 netmask ffffff00 broadcast 192.168.58.255 
        ether 8:0:27:d2:7b:1f  
root@node1 # 
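The interface names above follow the Solaris tagged-VLAN naming convention: the interface PPA is VLAN ID × 1000 + device instance, so e1000g10000 is VLAN 10 on e1000g0 and e1000g12001 is VLAN 12 on e1000g1. A minimal sketch of that arithmetic (Python is used here purely for illustration):

```python
def vlan_ppa(driver: str, vlan_id: int, instance: int) -> str:
    """Solaris tagged-VLAN interface name: PPA = VLAN ID * 1000 + instance."""
    return f"{driver}{vlan_id * 1000 + instance}"

# The three VLAN interfaces plumbed above:
print(vlan_ppa("e1000g", 10, 0))  # e1000g10000 - public network, VLAN 10 on e1000g0
print(vlan_ppa("e1000g", 11, 0))  # e1000g11000 - interconnect, VLAN 11 on e1000g0
print(vlan_ppa("e1000g", 12, 1))  # e1000g12001 - interconnect, VLAN 12 on e1000g1
```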
Perform the same steps on the other cluster nodes.
In addition, the active network equipment (switches) must be configured accordingly. Then verify the network configuration (xxx.xxx.xxx.12 are the addresses of the other node's interfaces in the corresponding VLANs):
root@node1 # ping 192.168.56.12 
192.168.56.12 is alive 
root@node1 # ping 192.168.57.12 
192.168.57.12 is alive 
root@node1 # ping 192.168.58.12 
192.168.58.12 is alive 
Save the settings for the public interface
root@node1 # mv /etc/hostname.e1000g0 /etc/hostname.e1000g10000 
and don't forget to deconfigure the interconnect interfaces (they must be unplumbed):
root@node1 # ifconfig e1000g11000 down unplumb 
root@node1 # ifconfig e1000g12001 down unplumb 
Run scinstall again and configure the interconnect:
  >>> Cluster Transport Adapters and Cables <<< 
 
    You must configure the cluster transport adapters for each node in the 
    cluster. These are the adapters which attach to the private cluster  
    interconnect. 
 
    Select the first cluster transport adapter: 
 
        1) e1000g0 
        2) e1000g1 
        3) Other 
 
    Option:  1 
 
    This adapter is used on the public network also, you will need to  
    configure it as a tagged VLAN adapter for cluster transport. 
 
    What is the cluster transport VLAN ID for this adapter?  11 
 
    Searching for any unexpected network traffic on "e1000g11000" ... done 
    Verification completed. No traffic was detected over a 10 second  
    sample period. 
 
    Select the second cluster transport adapter: 
 
        1) e1000g0 
        2) e1000g1 
        3) Other 
 
    Option:  2 
 
    Will this be a dedicated cluster transport adapter (yes/no) [yes]?  no 
 
    What is the cluster transport VLAN ID for this adapter?  12 
 
    Searching for any unexpected network traffic on "e1000g12001" ... done 
    Verification completed. No traffic was detected over a 10 second  
    sample period. 
 
    Plumbing network address 172.16.0.0 on adapter e1000g0 >> NOT DUPLICATE ... done
    Plumbing network address 172.16.0.0 on adapter e1000g1 >> NOT DUPLICATE ... done
We check that scinstall has generated the cluster configuration command correctly
  >>> Confirmation <<< 
 
    Your responses indicate the following options to scinstall: 
 
      scinstall -i \  
           -C test \  
           -F \  
           -G lofi \  
           -T node=node1,node=node2,authtype=sys \  
           -w netaddr=172.16.0.0,netmask=255.255.240.0,maxnodes=64,maxprivatenets=10,numvirtualclusters=12 \  
           -A trtype=dlpi,name=e1000g0,vlanid=11 -A trtype=dlpi,name=e1000g1,vlanid=12 \  
           -B type=switch,name=switch1 -B type=switch,name=switch2 \  
           -m endpoint=:e1000g0,endpoint=switch1 \  
           -m endpoint=:e1000g1,endpoint=switch2 \  
           -P task=quorum,state=INIT 
 
    Are these the options you want to use (yes/no) [yes]?   
 
    Do you want to continue with this configuration step (yes/no) [yes]?   
and if it has, we confirm it.
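Note the private network settings in the generated command: netaddr=172.16.0.0 with netmask=255.255.240.0 is the range (172.16.0.0/20) out of which scinstall carves the per-adapter interconnect subnets and the clprivnet network. A quick sanity check, as an illustration only, that the subnets seen later in the ifconfig and clinterconnect output fit inside that range:

```python
import ipaddress

# netaddr=172.16.0.0, netmask=255.255.240.0 from the scinstall command
private = ipaddress.ip_network("172.16.0.0/20")

# Subnets allocated out of this range (seen in ifconfig / clinterconnect show):
for net in ("172.16.0.128/25",   # first transport adapter pair (VLAN 11)
            "172.16.1.0/25",     # second transport adapter pair (VLAN 12)
            "172.16.4.0/23"):    # clprivnet0 per-node addresses
    assert ipaddress.ip_network(net).subnet_of(private)
print("all interconnect subnets fit inside", private)
```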
After the first node reboots, we move on to configuring the second node. Run scinstall and, when configuring the interconnect, agree to autodiscovery
  >>> Autodiscovery of Cluster Transport <<< 
 
    If you are using Ethernet or Infiniband adapters as the cluster  
    transport adapters, autodiscovery is the best method for configuring  
    the cluster transport. 
 
    Do you want to use autodiscovery (yes/no) [yes]?   
 
 
    Probing ....  
 
    The following connections were discovered: 
 
        node1:e1000g0  switch1  node2:e1000g0 [VLAN ID 11] 
        node1:e1000g1  switch2  node2:e1000g1 [VLAN ID 12] 
 
    Is it okay to configure these connections (yes/no) [yes]?   
If for some reason autodiscovery did not work correctly, enter the configuration manually, specifying the interfaces and the corresponding VLANs. Check the configuration command
  >>> Confirmation <<< 
 
    Your responses indicate the following options to scinstall: 
 
      scinstall -i \  
           -C test \  
           -N node1 \  
           -G lofi \  
           -A trtype=dlpi,name=e1000g0,vlanid=11 -A trtype=dlpi,name=e1000g1,vlanid=12 \  
           -m endpoint=:e1000g0,endpoint=switch1 \  
           -m endpoint=:e1000g1,endpoint=switch2 
 
    Are these the options you want to use (yes/no) [yes]?   
 
    Do you want to continue with this configuration step (yes/no) [yes]?  
and if everything is correct, confirm it.
After the second node reboots, it should be joined to the cluster. We check the interface states and the cluster configuration:
root@node1 # dladm show-dev 
e1000g0         link: up        speed: 1000  Mbps       duplex: full 
e1000g1         link: up        speed: 1000  Mbps       duplex: full 
clprivnet0              link: unknown   speed: 0     Mbps       duplex: unknown 
root@node1 # dladm show-link 
e1000g0         type: non-vlan  mtu: 1500       device: e1000g0 
e1000g10000     type: vlan 10   mtu: 1500       device: e1000g0 
e1000g11000     type: vlan 11   mtu: 1500       device: e1000g0 
e1000g1         type: non-vlan  mtu: 1500       device: e1000g1 
e1000g12001     type: vlan 12   mtu: 1500       device: e1000g1 
clprivnet0      type: legacy    mtu: 1486       device: clprivnet0 
root@node1 # ifconfig -a 
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1 
        inet 127.0.0.1 netmask ff000000  
e1000g10000: flags=209000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER,CoS> mtu 1500 index 2 
        inet 192.168.56.11 netmask ffffff00 broadcast 192.168.56.255 
        groupname sc_ipmp0 
        ether 8:0:27:0:49:75  
e1000g11000: flags=201008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4,CoS> mtu 1500 index 4 
        inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255 
        ether 8:0:27:0:49:75  
e1000g12001: flags=201008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4,CoS> mtu 1500 index 3 
        inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127 
        ether 8:0:27:d2:7b:1f  
clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 5 
        inet 172.16.4.1 netmask fffffe00 broadcast 172.16.5.255 
        ether 0:0:0:0:0:1  
root@node1 # clinterconnect show 
 
=== Transport Cables ===                        
 
Transport Cable:                                node1:e1000g11000,switch1@1 
  Endpoint1:                                       node1:e1000g11000 
  Endpoint2:                                       switch1@1 
  State:                                           Enabled 
 
Transport Cable:                                node1:e1000g12001,switch2@1 
  Endpoint1:                                       node1:e1000g12001 
  Endpoint2:                                       switch2@1 
  State:                                           Enabled 
 
Transport Cable:                                node2:e1000g11000,switch1@2 
  Endpoint1:                                       node2:e1000g11000 
  Endpoint2:                                       switch1@2 
  State:                                           Enabled 
 
Transport Cable:                                node2:e1000g12001,switch2@2 
  Endpoint1:                                       node2:e1000g12001 
  Endpoint2:                                       switch2@2 
  State:                                           Enabled 
 
 
=== Transport Switches ===                      
 
Transport Switch:                               switch1 
  State:                                           Enabled 
  Type:                                            switch 
  Port Names:                                      1 2 
  Port State(1):                                   Enabled 
  Port State(2):                                   Enabled 
 
Transport Switch:                               switch2 
  State:                                           Enabled 
  Type:                                            switch 
  Port Names:                                      1 2 
  Port State(1):                                   Enabled 
  Port State(2):                                   Enabled 
 
 
--- Transport Adapters for node1 ---            
 
Transport Adapter:                              e1000g11000 
  State:                                           Enabled 
  Transport Type:                                  dlpi 
  device_name:                                     e1000g 
  device_instance:                                 0 
  lazy_free:                                       1 
  dlpi_heartbeat_timeout:                          10000 
  dlpi_heartbeat_quantum:                          1000 
  nw_bandwidth:                                    80 
  bandwidth:                                       70 
  vlan_id:                                         11 
  ip_address:                                      172.16.0.129 
  netmask:                                         255.255.255.128 
  Port Names:                                      0 
  Port State(0):                                   Enabled 
 
Transport Adapter:                              e1000g12001 
  State:                                           Enabled 
  Transport Type:                                  dlpi 
  device_name:                                     e1000g 
  device_instance:                                 1 
  lazy_free:                                       1 
  dlpi_heartbeat_timeout:                          10000 
  dlpi_heartbeat_quantum:                          1000 
  nw_bandwidth:                                    80 
  bandwidth:                                       70 
  vlan_id:                                         12 
  ip_address:                                      172.16.1.1 
  netmask:                                         255.255.255.128 
  Port Names:                                      0 
  Port State(0):                                   Enabled 
 
 
--- Transport Adapters for node2 ---            
 
Transport Adapter:                              e1000g11000 
  State:                                           Enabled 
  Transport Type:                                  dlpi 
  device_name:                                     e1000g 
  device_instance:                                 0 
  vlan_id:                                         11 
  lazy_free:                                       1 
  dlpi_heartbeat_timeout:                          10000 
  dlpi_heartbeat_quantum:                          1000 
  nw_bandwidth:                                    80 
  bandwidth:                                       70 
  ip_address:                                      172.16.0.130 
  netmask:                                         255.255.255.128 
  Port Names:                                      0 
  Port State(0):                                   Enabled 
 
Transport Adapter:                              e1000g12001 
  State:                                           Enabled 
  Transport Type:                                  dlpi 
  device_name:                                     e1000g 
  device_instance:                                 1 
  vlan_id:                                         12 
  lazy_free:                                       1 
  dlpi_heartbeat_timeout:                          10000 
  dlpi_heartbeat_quantum:                          1000 
  nw_bandwidth:                                    80 
  bandwidth:                                       70 
  ip_address:                                      172.16.1.2 
  netmask:                                         255.255.255.128 
  Port Names:                                      0 
  Port State(0):                                   Enabled 
 
root@node1 # clinterconnect status 
 
=== Cluster Transport Paths === 
 
Endpoint1               Endpoint2               Status 
---------               ---------               ------ 
node1:e1000g12001       node2:e1000g12001       Path online 
node1:e1000g11000       node2:e1000g11000       Path online 
 
All that remains is to configure IPMP for the public network using interface e1000g10001, and we can consider that at least the cluster's network resources are configured.
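scinstall has already placed the public interface e1000g10000 into IPMP group sc_ipmp0 (visible in the ifconfig output above). Adding e1000g10001 (VLAN 10 on e1000g1) to that group could look roughly like the sketch below; this assumes link-based IPMP on Solaris 10, and the hostname-file contents are an assumption to adjust for your own setup.

```shell
# Sketch only (link-based IPMP, Solaris 10): plumb VLAN 10 on the second
# physical interface and add it to the group created by scinstall.
ifconfig e1000g10001 plumb
ifconfig e1000g10001 group sc_ipmp0 up

# Persist across reboots; the file contents here are an assumption.
echo "group sc_ipmp0 up" > /etc/hostname.e1000g10001
```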
So, to configure Sun Cluster on servers with two network interfaces we needed three VLANs: one for the public network and two for the interconnect. The public network must be moved into a VLAN before scinstall is run.