Switch Stacking : Overview, Configuration & FAQ

NADDOD Neo Switch Specialist Jan 10, 2023

Switch stacking is an important switch feature that combines two or more switches so they work together, providing as many ports as possible in a limited space. The stacked switches form a single stacking unit, greatly increasing the capacity of the network.

Switch stacking
Stacking switches together can optimize network performance and provides a scalable, flexible network solution. With stacking, multiple switches can be managed centrally, which greatly simplifies management, especially in data centers and IT rooms. Users can add or remove switches from the stack as needed without affecting the performance of the entire network, and if a link in the stack fails, the other switches continue to work.

How Does Switch Stacking Work?

Switch stacking can be achieved with DAC high-speed cables, optical modules and patch cables, or cables specifically designed for stacking. A stack consists of a stack master switch and stack backup switches: typically, every switch in the stack other than the master is called a backup switch. The stack master is the core switch that manages the backup switches, and it stores the running configuration files for the entire stack. Users log in to the stacking system through the master switch and configure and manage all switches in the stack from one place. If the master switch fails, the stacking system elects a new master from the backup switches without affecting the performance of the entire network.

The number of switches supported in a stack varies by switch brand. However, no matter how many switches are stacked together, the system assigns one stack master to control the entire stacking system, and users manage and maintain the whole stack by operating on that master switch.
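
The master/backup relationship described above can be pictured with a small, vendor-neutral model. The Python sketch below is purely illustrative: the StackMember and SwitchStack classes and the priority-based election rule are assumptions made for this example, not any vendor's actual election algorithm.

```python
# Minimal conceptual model of a switch stack with master failover.
# The priority-based election rule is an illustrative assumption;
# real stacking systems use vendor-specific election criteria.

from dataclasses import dataclass, field


@dataclass
class StackMember:
    member_id: int
    priority: int          # higher priority wins the election (assumed rule)
    alive: bool = True


@dataclass
class SwitchStack:
    members: list = field(default_factory=list)

    def master(self):
        """Return the current master: the highest-priority member still alive."""
        alive = [m for m in self.members if m.alive]
        if not alive:
            raise RuntimeError("stack has no operational members")
        return max(alive, key=lambda m: m.priority)

    def fail(self, member_id: int):
        """Mark a member as failed; the next call to master() re-elects."""
        for m in self.members:
            if m.member_id == member_id:
                m.alive = False


# Example: a three-member stack; member 1 starts as master.
stack = SwitchStack([StackMember(1, priority=15),
                     StackMember(2, priority=10),
                     StackMember(3, priority=5)])
print(stack.master().member_id)   # -> 1
stack.fail(1)                     # the master fails
print(stack.master().member_id)   # -> 2 (a backup takes over, stack keeps working)
```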

Typical Stacking Topologies

Chain topology and ring topology are two common types of stacking topologies, both of which have their own advantages and disadvantages.

In a chain topology, the first and last switches in the stack do not need to be physically connected to each other, which makes it suitable for long-distance stacking. However, if one of the stacking links fails, the stack splits. There is also only one path through the whole stacking system, so the bandwidth utilization of the stacking links is low. When the stack members are far apart and a ring connection is difficult to form, a chain connection can be used.

In a ring topology, the first and last switches must be physically connected, so it is not well suited to long-distance stacking when DAC high-speed cables or short stacking cables are used. This stacking method is more reliable: if one of the stacking links fails, the ring topology degrades into a chain topology and the stacking system continues to operate normally. In addition, the stacking links have high bandwidth utilization, and data can be forwarded along the shortest path.
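
The difference between the two topologies under a single link failure can be illustrated with a simple connectivity check. The Python sketch below is a plain graph example using an assumed four-member chain and ring layout; it is not a model of any vendor's stacking protocol.

```python
# Illustrative check of how chain and ring stacking topologies react to a
# single stacking-link failure.

def connected(nodes, links):
    """Return True if every node is reachable from the first node."""
    if not nodes:
        return True
    seen, to_visit = {nodes[0]}, [nodes[0]]
    while to_visit:
        cur = to_visit.pop()
        for a, b in links:
            nxt = b if a == cur else a if b == cur else None
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                to_visit.append(nxt)
    return seen == set(nodes)


switches = [1, 2, 3, 4]
chain = [(1, 2), (2, 3), (3, 4)]   # first and last members not connected
ring = chain + [(4, 1)]            # first and last members also connected

failed = (2, 3)                    # assume this stacking link fails
print(connected(switches, [l for l in chain if l != failed]))  # False: the chain splits
print(connected(switches, [l for l in ring if l != failed]))   # True: ring degrades to a chain
```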

How Should Switch Stacking be Configured?

In general, the switch stacking configuration steps are as follows (a scripted sketch of this workflow appears after the list).

  1. With the switches powered off, complete the stacking connections between the switches using DAC/AOC high-speed cables or optical modules and patch cables. Note: the number of stacked switches must not exceed the maximum supported stack size.
  2. Power on the switches in sequence to complete the stacking configuration.
  3. After all switches are configured, the switches reboot and each stack member is assigned its role.
  4. After the reboot is complete, only the stack master switch has configuration privileges; checking interface information on the master displays the interfaces of all members.
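
As referenced above, the post-cabling part of this workflow could in principle be scripted. The sketch below uses the third-party Netmiko library; the device type, credentials, addresses, and the stacking and verification commands are placeholders for illustration only, since the actual commands and stack limits differ by vendor and should be taken from the switch's own documentation.

```python
# Sketch of automating stack member priorities and verification over SSH.
# Assumes the Netmiko library is installed (pip install netmiko).

from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",        # assumed platform; adjust for your vendor
    "host": "192.0.2.10",              # example management IP of the stack
    "username": "admin",
    "password": "example-password",
}

with ConnectHandler(**device) as conn:
    # Placeholder commands setting member priorities (highest -> intended master).
    conn.send_config_set([
        "switch 1 priority 15",
        "switch 2 priority 10",
    ])
    # Verify member roles after the stack has rebooted and converged.
    print(conn.send_command("show switch"))
```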

Frequently Asked Questions on Switch Stacking

To help you better understand stacking, here is a collection of frequently asked questions and answers about switch stacking.

  1. Virtual Stacking and Physical Stacking
    Physical stacking is done through dedicated stacking cables and stacking ports. The coupling between switches is relatively strong and stability is relatively high. However, the technical barrier of this stacking method is high, the stacking cables must be supplied by the original vendor and are relatively expensive, and they are generally short, so stacking over long distances is not possible.

    Virtual stacking is done through the switch's ordinary service ports. The coupling and stability are relatively weaker, but there is much more choice of cabling, it is more cost-effective, and the stacking distance is not limited, so it can cover more stacking scenarios. In summary, compared with physical stacking, virtual stacking offers users considerable convenience in cost and management and is not constrained by distance, making it a relatively good stacking method.
  2. Stacked vs. Chassis Switches
    Both stacked and chassis switches provide more ports and are managed as a single device. However, each type has its own advantages and disadvantages.

    A chassis switch is a modular switch that is expanded by plugging different types of line cards into fixed slots in the chassis. Unlike a switch stack, which becomes a single whole by connecting stacking cables, a chassis switch does not require multiple switches to be connected, because its module slots are built in. However, the upfront investment for a chassis switch is much higher than for a switch stack. In terms of upfront cost, switch stacking is lower, and stacking can also cover long-distance scenarios across wider areas that a single chassis cannot.
  3. Switch Stacking and MLAG
    MLAG (multi-chassis link aggregation) is a technology that combines multiple physical links across devices into a single logical link, offering high availability and high throughput. Both stacking and MLAG can provide link redundancy. Stacking is commonly used at the enterprise network access layer, with the advantages of easier management and lower O&M costs; MLAG is usually used at the data center access layer, with the advantages of relatively less configuration, higher ROI, and redundancy raised to the device level.
  4. Switch Stacking, Cascading, and Clustering
    Switch stacking, cascading, and clustering are all common techniques for interconnecting multiple switches.

    ● Cascading expands the number of ports and increases device capacity by connecting switches through multiple ports. In principle, cascading can be done between switches from any network equipment manufacturer. However, in contrast to stacking, cascaded switches remain logically independent, and each switch must be configured and managed in turn. The additional ports gained through stacking share the switch's total backplane bandwidth with the existing ports, whereas the ports of cascaded switches do not.

    ● Clustering is when multiple interconnected (cascaded or stacked) switches are managed as one logical device. In a switch cluster, there is generally only one switch with a management role, called the command switch, which manages the other switches. These switches occupy only one IP address in the network (required only for the command switch). Under the unified management of the command switch, the switches in the cluster work together, greatly reducing the management burden.
  5. Switch Stacking, Uplink, and Trunk Link
    An uplink is a connection from one switch to another through an uplink port; it expands connectivity but adds essentially no bandwidth. However, uplinks can interconnect switches from different manufacturers and models, which offers great flexibility.

    A trunk link is a connection between two Layer 2 switches and is ideal for passing VLAN information between them. It is often used to build internal networks that contain LANs, VLANs, and WANs, allowing traffic from multiple VLANs to pass through the same port while keeping the VLANs separate from each other.

    Generally, switch stacking can provide more bandwidth while simplifying network management.