How to Install Ruby on Rails on Raspberry Pi 3

Type the following command into the Terminal window, once you’ve connected via SSH.
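
A typical first command on a fresh Raspbian install pulls in the basic tools the rest of the guide relies on (the exact package list is an assumption, not the original one):

sudo apt-get install -y git curl build-essential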

If you come across a 404 Error, you’ll likely need to update the package index, and this can be done using the following command.
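
Updating the package index is done with apt-get:

sudo apt-get update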

While you're installing the required packages, you'll also need to retrieve the SSL package, the SQL database package, and more.
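
A package set that covers SSL, SQLite and the usual Ruby build dependencies looks something like this (the exact names are an assumption, but these are standard Raspbian packages):

sudo apt-get install -y libssl-dev libsqlite3-dev sqlite3 libreadline-dev zlib1g-dev libyaml-dev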

Next, install RVM straight from its repository on GitHub. Start by importing the RVM signing key:

sudo gpg2 --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
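
With the key imported, RVM is normally installed with its official installer script; the --rails flag is an assumption that also pulls in Ruby and Rails, which matches the later steps:

curl -sSL https://get.rvm.io | bash -s stable --rails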

Once you've installed the required packages and RVM, it's recommended that you run a script so that Ruby is enabled in your shell.
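
The script in question is almost certainly the RVM environment script that the installer places in your home directory:

source ~/.rvm/scripts/rvm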

You should now have successfully installed Ruby and Rails. You can test for Ruby by typing the following command.
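
The usual command for this is:

ruby -v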

If it installed correctly, you'll see a message confirming which version of Ruby is installed, when it was released, and the platform it was built for.

You can also test for Rails by typing in the following command.
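
Similarly, for Rails:

rails -v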

HP’s Ethernet Virtual Interconnect (EVI) vs VPLS and Cisco’s OTV

[Chart: comparing HP EVI with Cisco OTV and VPLS]

HP EVI is a MAC-over-GRE-over-IP solution. Ethernet frames are encapsulated into GRE/IP at ingress to the switch. The GRE/IP packets are then routed over the WAN connection between the data centers.

EVI adds a software process that acts as a control plane to distribute the MAC addresses in each VLAN between the EVI-enabled switches. Thus, the switch in data center A updates the MAC address table in data center B and vice versa. By contrast, in traditional operation, Ethernet MAC addresses are auto-discovered as frames are received by the switch.

Because HP has chosen to use point-to-point GRE, the EVI edge switch must perform packet replication. Ethernet protocols such as ARP rely heavily on broadcasts to function. In a two-site network this isn't a problem, but with three sites or more, the EVI ingress switch needs to replicate a broadcast EVI frame to every site; with five data centers, for example, every broadcast must be copied to the four remote sites. HP assures me that this can be performed at line rate, for any speed, for any number of data centers. That may be so, but creating full-mesh replication across n × (n-1) WAN circuits could result in poor bandwidth utilization in networks that have high volumes of Ethernet broadcasts.

Cisco’s OTV is also MAC-over-GRE-over-IP (using EoMPLS headers), but it adds a small OTV label into the IP header. The OTV control plane acts to propagate the MAC address routing table.

Like HP's EVI, OTV can complicate load balancing. Cisco's Virtual Port Channel (vPC) shares the data plane while keeping the control planes separate, whereas HP's IRF merges switches into a single control plane. Although a vPC-enabled pair of Nexus 7000 switches runs as autonomous control planes, NX-OS can load balance evenly using IP. OTV load balances using a 5-tuple hash and will distribute traffic over multiple WAN paths.

OTV also supports multicast routing in the WAN, which delivers much more efficient replication of Ethernet broadcasts in large-scale environments. Instead of meshing a large DCI core, using Source-Specific Multicast should (with good reason) be more efficient for multiple sites. Even badly designed applications, such as Microsoft NLB, will be handled much more efficiently over multicast.

For many enterprises, MPLS is not a consideration. MPLS is a relatively complex group of protocols that requires a fair amount of time to learn and comprehend. However, building mission-critical business services without MPLS is genuinely hard. Service providers can offer L2 DCI using their MPLS networks with VPLS. Operationally, enterprise infrastructure is diverse and customised to each use case, while service provider networks tend toward homogeneity and simplicity because of their scale.

Some enterprises will buy managed VPLS services from service providers. They will also discover that such VPLS services are of variable quality, offer poor loop prevention, and can be expensive and inefficient. (For more, see the above-referenced report.) This is what drives Cisco and HP to deliver better options in OTV and EVI.

 

WeMos D1 R2 Access Point setup

Install the Windows USB-to-UART driver (CH340).

Connect your ESP8266 to your computer.

Open the Arduino IDE and paste in the following code:

#include <ESP8266WiFi.h>

WiFiServer server(80); // Initialize the HTTP server on port 80

void setup() {
  WiFi.mode(WIFI_AP);                     // Run the ESP8266 as an access point
  WiFi.softAP("ESP8266-AP", "A1B2C3D4");  // softAP(SSID, password)
  server.begin();                         // Start the HTTP server
}

void loop() { }

 

Full Duplex DOCSIS® 3.1: The Evolution of DOCSIS

Full Duplex DOCSIS® 3.1 is an extension of the DOCSIS 3.1 specification that will significantly increase upstream capacity and enable symmetric multi-Gbps services over existing HFC networks. Full Duplex DOCSIS 3.1 technology builds on the successful completion of the DOCSIS 3.1 specification, which has made deployments of 10 Gbps downstream and 1 Gbps upstream broadband speeds a reality.

In Full Duplex communication, the upstream and downstream traffic concurrently use the same spectrum, doubling the efficiency of spectrum use. In current DOCSIS networks, which use FDD (Frequency Division Duplexing), the spectrum is split between the upstream and downstream, so each direction can only ever use its own portion. Full Duplex communication removes that split and lets upstream and downstream traffic use the same spectrum at the same time.

DOCSIS 1.0 (1997): Initial cable broadband technology. Downstream capacity 40 Mbps, upstream capacity 10 Mbps.
DOCSIS 1.1 (2001): Added voice over IP service. Downstream capacity 40 Mbps, upstream capacity 10 Mbps.
DOCSIS 2.0 (2002): Higher upstream speed. Downstream capacity 40 Mbps, upstream capacity 30 Mbps.
DOCSIS 3.0 (2008): Greatly enhances capacity. Downstream capacity 1 Gbps, upstream capacity 100 Mbps.
DOCSIS 3.1 (2016): Capacity and efficiency progression. Downstream capacity 10 Gbps, upstream capacity 1-2 Gbps.
Full Duplex DOCSIS 3.1 (production date TBD): Symmetrical streaming and increased upload speeds. Downstream capacity 10 Gbps, upstream capacity 10 Gbps.

TDD Frequencies: Band & Spectrum

LTE Band Number    Allocation (MHz)    Width of Band (MHz)
33                 1900 – 1920         20
34                 2010 – 2025         15
35                 1850 – 1910         60
36                 1930 – 1990         60
37                 1910 – 1930         20
38                 2570 – 2620         50
39                 1880 – 1920         40
40                 2300 – 2400         100
41                 2496 – 2690         194
42                 3400 – 3600         200
43                 3600 – 3800         200

Optimising Xen

XenServer network performance can be a little unoptimised out of the box compared with products such as VMware, but there are a few easy steps you can take to improve it.

Disable TCP Checksum offload

If you notice a drop in network performance, you can take a dump of the network traffic to see whether you are getting checksum mismatches or errors. If you are, you can manually disable TCP checksum offloading for the host's network adapters.

To create a dump, run this command:

tcpdump -i eth0 -v -nn | grep incorrect

The main thing to look out for in the output is "incorrect (-> 0x6e35)". This shows that checksums are failing on receive, which means the server is having TCP offloading issues. It can be fixed by identifying which offload features are marked as active on the network interface card. If you have multiple NICs, you will need to do this for each one:

ethtool -k ethX
Output:
Offload parameters for eth0:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: off
large-receive-offload: off
  • Scatter-Gather I/O – Rather than passing one large buffer, several small buffers that together make up the large buffer are passed. This is more efficient than passing a single large buffer.
  • TCP Segmentation Offload – The ability to frame data according to the MTU size, applying the same IP header to all packets. Useful when the buffer is much larger than the MTU of the link; the segmentation into smaller packets is offloaded to the NIC.
  • Generic Segmentation Offload – Used to postpone segmentation for as long as possible; the segmentation is performed just before entry into the driver's xmit routine. GSO and TSO are only significantly effective when the MTU is much smaller than the buffer size.
  • Generic Receive Offload – GSO only works for transmitted packets; GRO allows received packets to be merged and then re-fragmented at output. Unlike LRO, which merges every packet, GRO merges with restrictions that keep the important fields in the packet intact. The NAPI API polls for new packets and processes them in batches before passing them to the OS.
  • Large Receive Offload – Combines multiple incoming packets into a single buffer before passing it up to the OS stack. The benefit is that the OS sees fewer packets and uses less CPU time.

Depending on your NIC vendor, the names of these offload features may vary, and some vendors provide additional ones.
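
If you only want to test the effect on a single interface first, the same offloads can be switched off directly with ethtool (eth0 below is just an example interface name):

ethtool -K eth0 rx off tx off sg off tso off gso off gro off lro off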

Now you can run the script below to disable TCP offload:

#!/bin/bash
# Disable NIC offload features, either directly on the local interfaces (--local)
# or persistently via the XenServer (xapi) VIF/PIF other-config settings.
# These mode names match both the ethtool -K flags and xapi's ethtool-* keys.
if_modes="rx tx sg tso gso"

if [[ "$1" == "--local" || "$1" == "-l" ]]; then
    echo -n "disabling checksum offloading for local devices... "
    for iface in $(ifconfig | awk '$0 ~ /Ethernet/ { print $1 }'); do
        for if_mode in ${if_modes}; do
            ethtool -K $iface $if_mode off 2>/dev/null
        done
    done
    echo "done."
else
    echo -n "disabling checksum offloading in xapi settings... "
    for VIF in $(xe vif-list --minimal | sed -e 's/,/ /g')
    do
        ###xe vif-param-clear uuid=$VIF param-name=other-config
        for if_mode in ${if_modes}; do
            xe vif-param-set uuid=$VIF other-config:ethtool-${if_mode}="off"
        done
    done
    for PIF in $(xe pif-list --minimal | sed -e 's/,/ /g')
    do
        ###xe pif-param-clear uuid=$PIF param-name=other-config
        for if_mode in ${if_modes}; do
            xe pif-param-set uuid=$PIF other-config:ethtool-${if_mode}="off"
        done
    done
    echo "done."
fi
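
Saved as disable-offload.sh (the filename is just an example), the script can then be run in either mode:

chmod +x disable-offload.sh
./disable-offload.sh --local   # disable the offloads directly on the local NICs with ethtool
./disable-offload.sh           # write the ethtool-* overrides into the xapi VIF/PIF other-config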

Powering off a virtual machine on an ESXi host from CLI

In ESXi 4.x and above, you can use the k command in esxtop to send a signal to, and kill, a running virtual machine process.

  1. On the ESXi console, enter Tech Support mode and log in as root.
  2. Start the esxtop utility by running this command: esxtop
  3. Press c to switch to the CPU resource utilization screen.
  4. Press Shift+v to limit the view to virtual machines. This may make it easier to find the Leader World ID in step 7.
  5. Press f to display the list of fields.
  6. Press c to add the column for the Leader World ID.
  7. Identify the target virtual machine by its Name and Leader World ID (LWID).
  8. Press k.
  9. At the World to kill prompt, type in the Leader World ID from step 7 and press Enter.
  10. Wait 30 seconds and validate that the process is no longer listed.
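
On ESXi 5.0 and later, the same result can also be achieved non-interactively with esxcli; the world ID below is an example value taken from the list output, and --type can be soft, hard or force:

esxcli vm process list
esxcli vm process kill --type=soft --world-id=1234567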