Full Duplex DOCSIS® 3.1 - The Evolution of DOCSIS

Full Duplex DOCSIS® 3.1 is an extension of the DOCSIS 3.1 specification that will significantly increase upstream capacity and enable symmetric multi-Gbps services over existing HFC networks. Full Duplex DOCSIS 3.1 technology builds on the successful completion of the DOCSIS 3.1 specification, which has made 10 Gbps downstream and 1 Gbps upstream broadband deployments a reality.

In Full Duplex communication, the upstream and downstream traffic concurrently use the same spectrum, doubling the efficiency of spectrum use. Current DOCSIS networks use FDD (Frequency Division Duplexing), in which the spectrum is split between the upstream and downstream. Full Duplex communication removes that split, letting upstream and downstream traffic use the same spectrum at the same time.

Version                  Highlights                                          Downstream  Upstream  Production Date
DOCSIS 1.0               Initial cable broadband technology                  40 Mbps     10 Mbps   1997
DOCSIS 1.1               Added voice over IP service                         40 Mbps     10 Mbps   2001
DOCSIS 2.0               Higher upstream speed                               40 Mbps     30 Mbps   2002
DOCSIS 3.0               Greatly enhances capacity                           1 Gbps      100 Mbps  2008
DOCSIS 3.1               Capacity and efficiency progression                 10 Gbps     1-2 Gbps  2016
Full Duplex DOCSIS 3.1   Symmetrical streaming and increased upload speeds   10 Gbps     10 Gbps   TBD

TDD Frequency Bands & Spectrum

LTE Band Number   Allocation (MHz)   Width of Band (MHz)
33                1900 – 1920        20
34                2010 – 2025        15
35                1850 – 1910        60
36                1930 – 1990        60
37                1910 – 1930        20
38                2570 – 2620        50
39                1880 – 1920        40
40                2300 – 2400        100
41                2496 – 2690        194
42                3400 – 3600        200
43                3600 – 3800        200

Optimising Xen

XenServer network performance can be somewhat unoptimised out of the box compared to products such as VMware, but there are a few easy steps you can take to bump up the speed.

Disable TCP Checksum Offload

If you notice a drop in network performance, take a dump of the network traffic to see if you are getting checksum mismatches/checksum errors. If you are, you can manually disable TCP checksum offloading for the host's network adapter.

To create a dump, run this command:

tcpdump -i eth0 -v -nn | grep incorrect
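
On a busy host the unfiltered dump can scroll past very quickly, so it can help to bound the capture. A minimal sketch of the same check with a packet limit (the interface name and the count of 1000 are arbitrary examples):

# Stop after 1000 packets and match the error case-insensitively.
tcpdump -i eth0 -v -nn -c 1000 | grep -i incorrect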

The main line to look out for here is “incorrect (-> 0x6e35)”. This error shows that checksum validation is failing on receive, which means the server is having TCP offloading issues. This can easily be fixed by identifying what has been marked as active for offloading on the network interface card. If you have multiple NICs, you will need to do this for each one:

ethtool -k ethX
Output:
Offload parameters for eth0:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: off
large-receive-offload: off
  • Scatter-Gather I/O – Rather than passing one large buffer, several small buffers that together make up the large buffer are passed. This is more efficient than passing a single large buffer.
  • TCP Segmentation Offload (TSO) – The ability to frame data according to the MTU size, repeating the same IP header with all packets. Useful when the buffer is much larger than the MTU on the link; the segmentation into smaller sizes is offloaded to the NIC.
  • Generic Segmentation Offload (GSO) – Used to postpone segmentation as long as possible, performing it just before the data enters the driver's xmit routine. GSO and TSO are only significantly effective when the MTU is much smaller than the buffer size.
  • Generic Receive Offload (GRO) – GSO only works for transmitted packets, so GRO handles the receive side: it merges incoming packets and allows them to be re-fragmented at output. Unlike LRO, which merges every packet, GRO merges with restrictions that keep the important fields in the packet intact. The NAPI API polls for new packets and processes them in batches before passing them to the OS.
  • Large Receive Offload (LRO) – Used to combine multiple incoming packets into a single buffer before passing it up to the OS stack. The benefit is that the OS sees fewer packets and uses less CPU time.

Depending on your NIC vendor, the names of these processes may vary, and some vendors provide additional offload processes.
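
Before disabling everything, it is worth checking which offloads your NIC actually lets you change. A minimal sketch, assuming a reasonably recent ethtool that tags immutable features with "[fixed]" and using eth0 as an example interface:

# Show only the offload features that can actually be toggled.
ethtool -k eth0 | grep -v '\[fixed\]'

# Example: turn off a single feature (TSO) and confirm the change took.
ethtool -K eth0 tso off
ethtool -k eth0 | grep tcp-segmentation-offload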

Now you can run the script below to disable TCP offload:

if [[ "$1" == "--local" || "$1" == "-l" ]]; then
    echo -n "disabling checksum offloading for local devices... "
    for iface in $(ifconfig | awk '$0 ~ /Ethernet/ { print $1 }'); do
        for if_mode in ${if_modes}; do
          ethtool -K $iface $if_mode off 2>/dev/null
        done
    done
    echo "done."
else
    echo -n "disabling checksum offloading in xapi settings... "
    for VIF in $(xe vif-list --minimal | sed -e 's/,/ /g')
    do
        ###xe vif-param-clear uuid=$VIF param-name=other-config
        for if_mode in ${if_modes}; do
            xe vif-param-set uuid=$VIF other-config:ethtool-${if_mode}="off"
        done
    done
    for PIF in $(xe pif-list --minimal | sed -e 's/,/ /g')
    do
        ###xe pif-param-clear uuid=$PIF param-name=other-config
        for if_mode in ${if_modes}; do
            xe pif-param-set uuid=$PIF other-config:ethtool-${if_mode}="off"
        done
    done
    echo "done."
fi
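
Usage is a sketch, assuming you have saved the script as disable_offload.sh (the filename is hypothetical). Run it with --local on a host to change the live devices immediately, or without the flag to persist the settings in xapi; the xapi settings only take effect the next time a VIF or PIF is plugged, e.g. after a VM restart or host reboot.

# Change the live interfaces on this host right away:
./disable_offload.sh --local

# Persist the "off" settings in xapi (applied on the next VIF/PIF plug):
./disable_offload.sh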

Powering off a virtual machine on an ESXi host from the CLI

In ESXi 4.x and above, you can use the k command in esxtop to send a signal to, and kill, a running virtual machine process.

  1. On the ESXi console, enter Tech Support mode and log in as root.
  2. Run the esxtop utility with this command: esxtop
  3. Press c to switch to the CPU resource utilization screen.
  4. Press Shift+v to limit the view to virtual machines. This may make it easier to find the Leader World ID in step 7.
  5. Press f to display the list of fields.
  6. Press c to add the column for the Leader World ID.
  7. Identify the target virtual machine by its Name and Leader World ID (LWID).
  8. Press k.
  9. At the World to kill prompt, type in the Leader World ID from step 7 and press Enter.
  10. Wait 30 seconds and validate that the process is no longer listed.
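
If you prefer a non-interactive approach, ESXi 5.x and later also expose the same operation through esxcli. A minimal sketch (the world ID shown is just an example value taken from the list output):

# List running VM processes and note the World ID of the stuck VM.
esxcli vm process list

# Ask the VMX process to shut down cleanly first ("soft"); escalate to
# --type=hard or --type=force only if the softer kill does not work.
esxcli vm process kill --type=soft --world-id=12345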