How to Install Ruby on Rails on Raspberry Pi 3

Type the following command into the Terminal window, once you’ve connected via SSH.
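The exact package list may vary, but git, curl, and the zlib development headers are the usual first prerequisites on Raspberry Pi OS, so treat this command as indicative rather than exact:

sudo apt-get install -y git curl zlib1g-dev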

If you come across a 404 Error, you’ll likely need to update the package index, and this can be done using the following command.
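sudo apt-get update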

Whilst you’re already getting the required packages, you’ll need to retrieve the SSL package, SQL database package, and more.
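The precise list differs from guide to guide; a representative set that covers OpenSSL, SQLite, and the common build tools looks like this:

sudo apt-get install -y openssl libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev autoconf automake libtool bison build-essential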

Next, import the GPG key that RVM releases are signed with, so the installer fetched from its repository on GitHub can be verified:

sudo gpg2 --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3

Once you’ve successfully installed the required packages and imported the key, run the RVM install script to enable Ruby.
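The script in question is almost certainly RVM’s stable installer; the --rails flag tells it to install Ruby and Rails in the same pass, and sourcing the rvm script afterwards makes the rvm command available in the current shell:

curl -sSL https://get.rvm.io | bash -s stable --rails
source ~/.rvm/scripts/rvm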

You should now have successfully installed Ruby and Rails. You can test for Ruby by typing the following command.
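ruby -v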

If installed correctly, you’ll see a line reporting which version of Ruby is installed, when it was released, and the platform it was built for.

You can also test for Rails by typing in the following command.
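rails -v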

HP’s Ethernet Virtual Interconnect (EVI) vs VPLS and Cisco’s OTV

[Chart: comparing HP EVI with Cisco OTV and VPLS]

HP EVI is a MAC-over-GRE-over-IP solution. Ethernet frames are encapsulated into GRE/IP at ingress to the switch. The GRE/IP packets are then routed over the WAN connection between the data centers.

EVI adds a software process that acts as a control plane, distributing the MAC addresses in each VLAN between the EVI-enabled switches. Thus, the switch in data center A updates the MAC address table in data center B and vice versa. By contrast, a traditional switch learns MAC addresses in the data plane, as frames arrive on its ports.

Because HP has chosen to use point-to-point GRE, the EVI edge switch must perform packet replication. Ethernet protocols such as ARP rely heavily on broadcasts to function. In a two-site network this isn’t a problem, but with three or more sites, the EVI ingress switch must replicate a broadcast frame to every other site. HP assures me that this can be performed at line rate, for any speed and any number of data centers. That may be so, but full-mesh replication across n × (n − 1) WAN circuits could result in poor bandwidth utilization in networks that have high volumes of Ethernet broadcasts: with five data centers, for example, each ingress switch copies every broadcast frame onto four tunnels, and the mesh as a whole comprises 5 × 4 = 20 unidirectional circuits.

Cisco’s OTV is also MAC-over-GRE-over-IP (the encapsulation reuses an EoMPLS-style header), but it adds a small OTV shim carrying the overlay and VLAN information. The OTV control plane, built on IS-IS, propagates the MAC address routing table between sites.
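As a rough sketch rather than a verified configuration, stretching VLANs over an OTV overlay on a Nexus 7000 looks something like this; the interface, site identifier, VLAN ranges, and multicast groups are placeholder values:

feature otv
! Identify this data center site and the VLAN used for local OTV adjacency
otv site-identifier 0x1
otv site-vlan 99

interface Overlay1
  ! Physical uplink that carries the encapsulated traffic into the WAN
  otv join-interface Ethernet1/1
  ! Multicast groups used for the control plane and for flooded data
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  ! VLANs stretched between the data centers
  otv extend-vlan 100-110
  no shutdown

Where the WAN cannot provide multicast, OTV also offers a unicast adjacency-server mode, at the cost of head-end replication similar to EVI’s.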

Like HP’s EVI, OTV can complicate load balancing. Cisco’s Virtual Port Channel (vPC) shares the data plane while keeping the control planes separate, whereas HP’s IRF merges the member switches under a single control plane. Although a vPC-enabled pair of Nexus 7000 switches runs as autonomous control planes, NX-OS can load balance evenly using IP. OTV load balances using a 5-tuple hash and will distribute traffic across multiple WAN paths.

OTV also supports the use of multicast routing in the WAN, which delivers far more efficient replication of Ethernet broadcasts in large-scale environments. Instead of meshing a large DCI core, a Source-Specific Multicast tree should, for good reason, be more efficient for multiple sites. Even badly designed applications that depend on flooding, such as Microsoft NLB, will be carried much more efficiently over multicast.

For many enterprises, MPLS is not a consideration: it is a relatively complex group of protocols that takes a fair amount of time to learn and comprehend. However, building mission-critical L2 DCI services without MPLS is genuinely hard. Service providers can offer L2 DCI over their MPLS networks using VPLS. Operationally, enterprise infrastructure is diverse and customised to each use case, while service provider networks tend toward homogeneity and simplicity because of their scale.

Some enterprises will buy managed VPLS services from service providers. They will also discover that such services are of variable quality, offer poor loop prevention, and can be expensive and inefficient. That is what drives Cisco and HP to deliver better options in OTV and EVI.