nly an hour ago

Given how slow protobuf and gRPC are, I wonder if the socket transport would ever be the throughput bottleneck here.

Changing transports means that if you want to move your gRPC server process to a different box, you now have new runtime configuration to implement/support and new performance characteristics to test.
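
Concretely, moving the server off-host while keeping vsock on the guest side would mean adding something like a forwarder on the host, which is exactly the sort of extra plumbing you then have to support and test. A rough sketch, assuming a socat build with vsock support (1.7.4 or later), an arbitrary vsock port 5000 and a placeholder hostname otherbox:

  # on the host: accept the guest's vsock connections and forward them over TCP
  $ socat VSOCK-LISTEN:5000,fork TCP:otherbox:5000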

I can see some of the security benefits if you are running on one host, but I also don't buy the advantages highlighted at the end of the article about using many different OSes and language environments on a single host. Seems like enabling and micro-optimising chaos instead of taming it.

Particularly in the ops demo: statically linking a C++ gRPC binary and standardising on a host OS and gcc-toolset doesn't seem that hard. On the other hand, if you're using e.g. a Python RPC server, are you even going to be able to feel the impact of switching to vsock?

Veserv 2 hours ago

Says it is fast, but presents zero benchmarks to demonstrate it is actually fast or even “faster”. It is shameful to make up adjectives just to sound cool.

  • rwmj an hour ago

    vsock is pretty widely used, and if you're using virtio-vsock it should be reasonably fast. Anyway if you want to do some quick benchmarks and have an existing Linux VM on a libvirt host:

    (1) 'virsh edit' the guest and check it has '<vsock/>' in the <devices> section of the XML.
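
    (If it's missing you can add it there. A quick way to check from the host, assuming the guest is called 'guestname', would be something like this; the exact cid line depends on whether the address is auto-assigned:)

      $ virsh dumpxml guestname | grep -A2 '<vsock'
      <vsock model='virtio'>
        <cid auto='yes'/>
      </vsock>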

    (2) On the host:

      $ nbdkit memory 1G --vsock -f
    
    (3) Inside the guest:

      $ nbdinfo 'nbd+vsock://2'
    
    (You should see the size reported as 1G. The '2' is the well-known vsock CID of the host.)

    And then you can try using commands like nbdcopy to copy data into and out of the host RAM disk over vsock, e.g.:

      $ time nbdcopy /dev/urandom 'nbd+vsock://2' -p
      $ time nbdcopy 'nbd+vsock://2' null: -p
    
    On my machine that's copying at a fairly consistent 20 Gbps, but it's going to depend on your hardware.

    To compare it to regular TCP:

      host $ nbdkit memory 1G -f -p 10809
      vm $ time nbdcopy /dev/urandom 'nbd://host' -p
      vm $ time nbdcopy 'nbd://host' null: -p
    
    TCP is about 2.5x faster for me.
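
    If you want to take NBD out of the picture and measure the raw vsock path, socat can do a crude one-way throughput test too (assuming a socat built with vsock support, 1.7.4 or later; the port number 5000 is arbitrary):

      # host side: read from the vsock connection and discard the data
      host $ socat -u VSOCK-LISTEN:5000 /dev/null
      # guest side: push zeroes at the host; dd prints the transfer rate when it finishes
      vm $ dd if=/dev/zero bs=1M count=4096 | socat -u - VSOCK-CONNECT:2:5000
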
    • gpderetta 35 minutes ago

      Is nbdcopy actually touching the data on the consumer side, or is it splicing to /dev/null?

      • rwmj 7 minutes ago

        It's actually copying the data. Splicing wouldn't be possible, since NBD is a client/server protocol.

        The difference between nbdcopy ... /dev/null and nbdcopy ... null: is that in the second case we avoid writing the data anywhere and just throw it away inside nbdcopy.

    • imiric 28 minutes ago

      Ah, thanks. That is a much better example than the one in the article.