[Cialug] link aggregation help requested
Hasler, Chris
ChrisHasler at alliantenergy.com
Mon Mar 25 14:55:25 UTC 2019
Hi Hakan,
I've never had much luck with Network Manager and bonding, so I usually turn it off by adding NM_CONTROLLED=no to the bond0 and each slave ifcfg file. Just realize that if you do turn it off, you'll need to use the CLI to edit the network interface files rather than the GUI. I found that turning it off let me make changes in the network ifcfg files and restart the network service for faster troubleshooting.
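For example, a minimal slave ifcfg sketch (RHEL/CentOS-style path; the interface name is just a placeholder for whatever your NICs are called):

    # /etc/sysconfig/network-scripts/ifcfg-enp6s0
    DEVICE=enp6s0
    TYPE=Ethernet
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    NM_CONTROLLED=no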
What options are being set for bonding, and where are they being set? I've had success using mode=802.3ad rather than mode=4, set in the bond0 ifcfg file rather than in /etc/modprobe.d/bonding.conf, but YMMV.
Example: BONDING_OPTS="mode=802.3ad miimon=10 lacp_rate=1"
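For context, a minimal ifcfg-bond0 sketch with that line in place (the addressing below is a placeholder; adjust for your network):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    BONDING_OPTS="mode=802.3ad miimon=10 lacp_rate=1"
    BOOTPROTO=none
    IPADDR=192.168.1.10
    PREFIX=24
    ONBOOT=yes
    NM_CONTROLLED=no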
Chris H.
-----Original Message-----
From: Cialug [mailto:cialug-bounces at cialug.org] On Behalf Of E. Hakan Duran
Sent: Sunday, March 24, 2019 11:43 PM
To: Central Iowa Linux Users Group
Subject: Re: [Cialug] link aggregation help requested
Thank you Dave for your reply. I actually changed the LACP rate from the default slow to fast today while troubleshooting this. I can easily go back to the default settings since I saved the config file; however, I know that will not solve my problem. The other parameter I manually changed from the settings Network Manager chose automatically was the transmit hash policy, which went from layer2 (0) to layer2+3 (2) if I recall correctly.
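(For reference, I believe the equivalent change from the command line would be something like the following, assuming the Network Manager connection is named bond0; I made mine in nm-connection-editor:

    nmcli connection modify bond0 bond.options "mode=802.3ad,xmit_hash_policy=layer2+3"
    nmcli connection up bond0
)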
Perhaps I have this wrong. I thought that by making a bond, one would effectively increase the bandwidth of the connection, which would translate to higher data transfer rates, such as faster file transfers. Is this perception accurate? Does layer 3 hashing enable faster data transfer then? How can I figure out whether my devices offer layer 3 hashing or not? The switch has a setting for the load balance algorithm, which is set to MAC address at the moment but can be changed to IP/MAC address if desired. Does this have the potential to achieve the faster-transfer goal?
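(In case it is useful: on the Linux side, the hash policy currently in effect can be read back from sysfs with

    cat /sys/class/net/bond0/bonding/xmit_hash_policy

which prints the policy name and its number.)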
I am in the process of upgrading the firmware and will report back if it causes any noticeable change in behavior.
Thanks very much in advance, and apologies for some probably basic/silly questions. I don't have a technical background and approach computing more as a hobbyist.
Hakan
On Sunday, March 24, 2019 9:03:42 PM CDT Dave Weis wrote:
> Hello
>
> I looked at https://cstan.io/?p=8876&lang=en and there's a setting in
> the BONDING_OPTS that might be affecting it. His/her output shows an
> LACP rate of slow and yours shows an LACP rate of fast.
>
> As far as speed, a single stream connection between two hosts will
> always stay hashed to a single member of the LAG unless your devices
> hash on layer 3/4 fields such as IP addresses and port numbers. The
> usual method is MAC address hashing, and that's going to be the same
> for every connection between the same two hosts.
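>
> (As an illustration, per the Linux bonding driver documentation, the
> default layer2 policy selects the outgoing slave as
>
>     hash = (source MAC XOR destination MAC XOR packet type) modulo slave count
>
> so every frame between one pair of MACs rides the same slave. The
> layer3+4 policy folds IP addresses and TCP/UDP ports into the hash,
> letting separate flows spread across the slaves.)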
>
> Also make sure you have the newest firmware on your switch.
>
> dave
>
>
>
> On Sun, Mar 24, 2019 at 8:37 PM Hakan E. Duran <ehakanduran at gmail.com>
> wrote:
>
> > Dear all,
> >
> > This may be a stupid oversight for all I know, but I have not been
> > able to solve it myself for several months now, so I decided to ask
> > here.
> >
> > I have a workstation on which I am trying to set up link aggregation
> > using the two available on-board ethernet interfaces (the motherboard
> > is an Asus Z10PE-D8WS, if it matters). I was able to set up the bond
> > using nm-connection-editor in linux (manjaro flavor) and also set up
> > the LAG on the Cisco managed switch I have (SG300-28, please see
> > attached). The workstation is LAG2, whereas the other two LAGs are
> > two NAS units. As you may see in the attached image, LAG2 seems to
> > have a standby member on port 24, instead of having 2 active members
> > like the NASes. I cannot figure out why one of the ethernet
> > connections is assigned standby member status. I pasted the output of
> > the command "sudo cat /proc/net/bonding/bond0" below; I am unable to
> > recognize a major problem there. The Cisco switch documentation
> > states that this can happen when there is a hardware limitation in
> > the connected device for building the aggregate (also see attached),
> > but I am not sure there would be any such limitation, since this is a
> > server main board, and I also don't want to reach that conclusion
> > before ruling out the possibility that I am missing something. I can
> > confirm that the current bond's benchmarks are no better than a
> > single ethernet connection in terms of speed at this point in time
> > (about 40-60 MBps for file transfers over a Gigabit ethernet
> > connection).
> >
> > Your pointers will be very much appreciated.
> >
> > Hakan
> >
> > sudo cat /proc/net/bonding/bond0
> > ...
> > Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
> >
> > Bonding Mode: IEEE 802.3ad Dynamic link aggregation
> > Transmit Hash Policy: layer2+3 (2)
> > MII Status: up
> > MII Polling Interval (ms): 100
> > Up Delay (ms): 0
> > Down Delay (ms): 0
> >
> > 802.3ad info
> > LACP rate: fast
> > Min links: 0
> > Aggregator selection policy (ad_select): stable
> > System priority: 65535
> > System MAC address: fa:aa:91:6b:bc:d2
> > Active Aggregator Info:
> > Aggregator ID: 1
> > Number of ports: 2
> > Actor Key: 9
> > Partner Key: 1001
> > Partner Mac Address: 00:38:df:d0:ee:49
> >
> > Slave Interface: enp6s0
> > MII Status: up
> > Speed: 1000 Mbps
> > Duplex: full
> > Link Failure Count: 1
> > Permanent HW addr: 2c:fd:a1:c6:1f:19
> > Slave queue ID: 0
> > Aggregator ID: 1
> > Actor Churn State: monitoring
> > Partner Churn State: monitoring
> > Actor Churned Count: 0
> > Partner Churned Count: 0
> > details actor lacp pdu:
> > system priority: 65535
> > system mac address: fa:aa:91:6b:bc:d2
> > port key: 9
> > port priority: 255
> > port number: 1
> > port state: 7
> > details partner lacp pdu:
> > system priority: 1
> > system mac address: 00:38:df:d0:ee:49
> > oper key: 1001
> > port priority: 1
> > port number: 71
> > port state: 117
> >
> > Slave Interface: enp5s0
> > MII Status: up
> > Speed: 1000 Mbps
> > Duplex: full
> > Link Failure Count: 1
> > Permanent HW addr: 2c:fd:a1:c6:1f:18
> > Slave queue ID: 0
> > Aggregator ID: 1
> > Actor Churn State: monitoring
> > Partner Churn State: monitoring
> > Actor Churned Count: 0
> > Partner Churned Count: 0
> > details actor lacp pdu:
> > system priority: 65535
> > system mac address: fa:aa:91:6b:bc:d2
> > port key: 9
> > port priority: 255
> > port number: 2
> > port state: 7
> > details partner lacp pdu:
> > system priority: 1
> > system mac address: 00:38:df:d0:ee:49
> > oper key: 1001
> > port priority: 1
> > port number: 72
> > port state: 69