Jumbo Frames, LACP, NFS

Now that my server has been up and running for a while, it’s time to get the neat stuff going 🙂.

A to-do item on my list has always been enabling jumbo frames and LACP on my OpenSolaris installation. So, let’s get this kicking!

Jumbo frames use an MTU of 9000 instead of 1500. Since I’ve aggregated nge0 and nge1 into aggr1, I can simply set the MTU on the aggregation and the corresponding values will automatically be set on nge0 and nge1 (even if you don’t see them 😉):

ifconfig aggr1 unplumb
dladm set-linkprop -p mtu=9000 aggr1
ifconfig aggr1 plumb
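
To double-check that the new MTU actually took on the aggregation, a quick (optional) query of the link property works:

dladm show-linkprop -p mtu aggr1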

Sweet stuff! Don’t forget to re-configure the address, netmask and gateway for the link. Now I want to enable LACP so I can make use of the combined throughput of the NICs:

dladm modify-aggr -L passive 1

Let’s check whether everything’s up and running as intended:

tsukasa@filesrv:/$ ifconfig aggr1
aggr1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 9000 index 2
inet 192.168.X.X netmask ffffff00 broadcast 192.168.X.X
tsukasa@filesrv:/$ dladm show-aggr 1
LINK            POLICY   ADDRPOLICY           LACPACTIVITY  LACPTIMER   FLAGS
aggr1           L4       auto                 passive       short       -----

And indeed, everything went smoothly.
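
If you want a per-port view of the LACP negotiation, dladm can show that as well (purely a sanity check, not required for the setup):

dladm show-aggr -L 1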

Now, even with this boost in performance, watching HD video over the network sometimes stuttered when many concurrent connections/operations were going up- and downstream at the same time. Also, OpenSolaris 2009.06’s CIFS service has the unpleasant habit of dying and refusing to come back up until I reboot, so I went the easy way and simply enabled NFS for this ZFS filesystem (can we call this a system? A mount? I don’t know…):

zfs set sharenfs=on filesrv/video
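
For the record, sharenfs also accepts full share_nfs option strings if you need to restrict access; a hypothetical example with a placeholder network (not what I actually use):

zfs set sharenfs=rw=@192.168.X.0/24,root=@192.168.X.0/24 filesrv/video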

I didn’t have to do anything else or specify explicit rw options since I had already set up the mappings before. Now, the only thing left to do on the client:

mount filesrv:/filesrv/video /home/tsukasa/video -t nfs -o defaults,users,noauto,rsize=8192,wsize=8192,timeo=14,intr
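
For reference, the matching /etc/fstab entry on the Linux client (the -t nfs syntax above is Linux-style) would look something like this, keeping the same options:

filesrv:/filesrv/video /home/tsukasa/video nfs defaults,users,noauto,rsize=8192,wsize=8192,timeo=14,intr 0 0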

And guess what, no more problems 🙂 .

Published by tsukasa, The fool’s herald.

4 thoughts on “Jumbo Frames, LACP, NFS”

  1. Hello, did you reach more than 120 MB/s in raw NFS transfer?
    I just configured an Intel PRO/1000 F Server Adapter (installed in a standard PCI slot) with a Planet GSD-802S to work as an LACP aggregated link, combined with the standard onboard Nvidia gigabit Ethernet card. Looks sweet; I already have such a setup, but with 2 x PCI-Express single-port server NICs.

    Now I’m wondering how to reach 250 MB/s with an aggregated link over NFS between server and client; let’s say both will have a setup like this:

    Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)

    Bonding Mode: IEEE 802.3ad Dynamic link aggregation
    Transmit Hash Policy: layer2 (0)
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 200
    Down Delay (ms): 200

    802.3ad info
    LACP rate: fast
    Aggregator selection policy (ad_select): bandwidth
    Active Aggregator Info:
    Aggregator ID: 2
    Number of ports: 2
    Actor Key: 17
    Partner Key: 3

  2. Did you figure this out?
    My Win7 box has LACP bridging 2 networks using the Intel driver (port teaming). In the statistics I see increasing bytes on input on card1 and output on card2 – it uses both, so it works.
    But LACP on my Linux box says it’s using only one port, while the second is on standby??
    Disabled RSTP and Flow Control – same situation.

  3. TooMeek, sorry for the late reply! I was digging through my old documentation (or what’s left of it) in order to give you a good answer. Unfortunately, there isn’t really that much I can tell you.

    NFS performance with LACP and bonding on Linux was kind of sketchy on my setup. I used the 2 nForce gigabit ports that came on my mainboard and quickly noticed a few problems (the driver caused hiccups under load, causing the adapter to go down – a known problem at the time), which made me switch to Intel expansion cards later on.

    If you plumbed your bond0 adapter on Linux and the mode for the bond is set to 4, you should absolutely see results. If you cannot set the MTU for your adapters (possible if the driver doesn’t support it – as was the case for my old nForce adapters, plus the lack of ethtool support), all you can effectively use is mode 0, i.e. balance-rr, for comparison against single-port setups (a minimal sketch of such a setup follows after the comments).

    Personally I would recommend not using adapters exclusively for sending or receiving, but going with the load-balancing/round-robin approach that loops through the adapters, and letting the switch do the rest.

    Performance-wise I never peaked at 250 MB/s, because even though my RAID delivers speeds of up to ~260 MB/s, I reach those speeds only when reading large chunks of data. Real-world performance is a different beast. It’s definitely an improvement over single gigabit and worth the trouble, though – especially if you have multiple clients accessing files at the same time.

    The rsize and wsize parameters for your mount also make a huge difference. Although the protocol is supposed to negotiate these values itself, I always found it more convenient to enforce them manually. But I’m probably only telling you things you already know.

    I can’t really talk about the client side anymore, though. I switched machines a good while ago and haven’t had any issues since that would push me to bond adapters again. Performance isn’t stellar but good enough, and I never pinpointed whether this had something to do with changes made in Solaris > snv_140 or with the new hardware. With encryption and other goodies active on my server today, the bottleneck is no longer the network connection…

  4. Switched to round-robin now. Testing bonding + bridging.
    Looks like the bridge is limiting bandwidth to one NIC.
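
A minimal Linux-side sketch of the bonding setup discussed in the reply above (interface names, addresses and the modprobe approach are assumptions, not anyone’s exact configuration):

# 802.3ad (mode 4) with fast LACP; assumes eth0/eth1 and an LACP-capable switch
# (for balance-rr, unload the module first and use mode=0 instead)
modprobe bonding mode=4 miimon=100 lacp_rate=fast
ifconfig bond0 192.168.X.X netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1
# both slaves should show up under the active aggregator here
cat /proc/net/bonding/bond0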

