
Netcat can be of great help in transferring files across a network, and it fits into a very flexible pipeline. For bare-minimum file transfer, only the tar and netcat utilities are required.

I tried different options and found the following superset pipeline.

sender:$ tar cf - some_directory | gzip -9 | \
  pv | gpg -c | nc -q0 ip_address_or_hostname_of_receiver 25000
receiver:$ nc -l -p 25000 | gpg -d | gzip -d | \
  pv | tar xf -

Some Caveats

  • tar and nc do not offer any security; that is why gpg is used. On a trusted network, the gpg stages could be removed from both the sender and receiver pipelines.
  • pv monitors the data rate very effectively. Note that pv is not placed symmetrically in the above pipelines: on the sender side it shows the data rate of the compressed stream, whereas on the receiver side it shows the data rate of the uncompressed stream. This is deliberate, so that the user gets an idea of both the compressed and uncompressed data rates. By reordering gzip and pv, this could be reversed.
  • Instead of using tar and gzip separately, one could combine them with ‘cfz’ and ‘xfz’. But this loses the differential data-rate view explained above; fine control is possible only by keeping them split. Depending on network throughput and CPU performance, either xz or lzop could be used instead of gzip: the former offers extremely good compression, whereas the latter gives very good performance.
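As an illustration of the first caveat, here is a trimmed variant for a trusted network with the gpg stages dropped (the host name and port are placeholders, as in the original pipeline):

```shell
# Trusted network: no encryption, gzip kept split from tar
receiver:$ nc -l -p 25000 | gzip -d | pv | tar xf -
sender:$ tar cf - some_directory | gzip -9 | pv | \
  nc -q0 ip_address_or_hostname_of_receiver 25000
```

Swapping gzip/gzip -d for lzop/lzop -d or xz/xz -d on both ends gives the speed or compression trade-offs mentioned above.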


There are many network throughput and performance measurement tools available. One of the most widely used is “iperf”. But I found “netcat” to be a very versatile and fantastic tool for the same job.

The only “extra” requirement is shell access on both ends of the link, so that netcat can be run on each side.

First, install the required tools:
apt-get install netcat-traditional bwm-ng pv

Assuming “machine1” and “machine2” are the two systems on either side of the link, do the following:

machine1:$ nc -l -p 5000 | pv > /dev/null
machine2:$ dd if=/dev/zero | pv | nc ip_address_or_hostname_of_machine1 5000

Netcat on machine1 opens a TCP listening socket and dumps the received data to /dev/null. machine2 reads from /dev/zero and forwards that data to machine1 through netcat. Flow control is done by the TCP stack, which keeps the throughput just below the network link speed. Note that “pv” prints the pipe throughput in bytes, so multiply its real-time value by 8 to get the throughput in bits per second.

Tools like “bwm-ng” could also be used to monitor the throughput of the interface, instead of or along with “pv”.

Some versions of netcat do not require “-p” in listening mode, so the command could be “nc -l 5000” instead. On the sending side, “cat” or “pv” could be used instead of “dd”, but “dd” prints a nice summary when it is interrupted.
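For reference, those variants look like this (the BSD-style listener and the pv-based sender are drop-in replacements for the commands above):

```shell
# BSD-style netcat: no -p needed for the listener
machine1:$ nc -l 5000 | pv > /dev/null
# pv as the data source instead of dd
machine2:$ pv < /dev/zero | nc ip_address_or_hostname_of_machine1 5000
```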

Of course, this is not completely unidirectional, because TCP acknowledgements flow from machine1 back to machine2, but that traffic is much smaller than the data flowing in the forward direction.

If we have the liberty of opening more shells, another pair of netcat sessions could be run in the reverse direction, so that simultaneous/full-duplex throughput could be measured.
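The reverse pair is simply the mirror image of the first, run concurrently on a second port (5001 here is an assumed example; any free port works):

```shell
# Reverse direction: machine2 now listens, machine1 sends
machine2:$ nc -l -p 5001 | pv > /dev/null
machine1:$ dd if=/dev/zero | pv | nc ip_address_or_hostname_of_machine2 5001
```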

I have used this method many times with Gigabit Ethernet and WiFi, with consistent results.

