[HN Gopher] VAccel: Hardware Acceleration for Lightweight Hypervisors
       ___________________________________________________________________
        
       VAccel: Hardware Acceleration for Lightweight Hypervisors
        
       Author : _ananos_
       Score  : 43 points
       Date   : 2020-12-04 15:29 UTC (7 hours ago)
        
 (HTM) web link (vaccel.org)
 (TXT) w3m dump (vaccel.org)
        
       | gorbypark wrote:
        | This seems pretty neat, as I've just started working on a
        | project using Firecracker. Are the virtio-accel
        | frontend/backends both kernel modules?
        
         | _ananos_ wrote:
          | The virtio-accel frontend is a kernel module. The backend is
          | VMM-specific, so there is one for QEMU and one for AWS
          | Firecracker. Check out
          | https://blog.cloudkernels.net/posts/vaccel_v2/ to give it a
          | try and let us know your thoughts!
        
           | gorbypark wrote:
           | Ahh, that makes sense. Thanks!
        
       | _ananos_ wrote:
        | More info on how we use it with AWS Firecracker is available in
        | our blog post: https://blog.cloudkernels.net/posts/vaccel_v2/
        | and on our GitHub pages: https://github.com/cloudkernels/vaccelRT,
        | https://github.com/nubificus/docker-jetson-inference
        
         | landerwust wrote:
          | The block diagram here:
          | https://blog.cloudkernels.net/static/vaccel_v2/vaccelrt.png#...
          | should probably have some close variant on your home page.
          | I've been around for 20 years and couldn't tell /at all/ what
          | vAccel is.
         | 
         | My best guess was something to do with making VMs run faster
        
           | _ananos_ wrote:
            | Thanks for your feedback! We will try to clarify things in
            | our next posts!
            | 
            | In short, vAccel is a framework that translates function
            | calls from users (upper side of the diagram) to the
            | relevant functions of the respective acceleration framework
            | (lower side of the diagram). For instance, calling a
            | function like image_classify (a user function) would result
            | in a call to the corresponding image_classify of
            | jetson-inference, which would, in turn, execute the image
            | classification on the GPU and return the result to the
            | user.
            | 
            | Hope that makes more sense!
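            | 
            | To make that concrete, here is a rough sketch of the
            | dispatch idea, assuming a per-backend operation table (the
            | names below are illustrative, not the actual vAccelRT API):
            | 
            |   #include <stdio.h>
            | 
            |   /* Per-backend operation table: each plugin
            |    * (jetson-inference, a CPU fallback, the virtio
            |    * transport, ...) provides its own implementation of
            |    * the user-visible operations. */
            |   struct accel_backend {
            |       const char *name;
            |       int (*image_classify)(const void *img, size_t len,
            |                             char *tag, size_t tag_len);
            |   };
            | 
            |   /* Dummy backend standing in for jetson-inference. */
            |   static int jetson_classify(const void *img, size_t len,
            |                              char *tag, size_t tag_len)
            |   {
            |       (void)img; (void)len;
            |       snprintf(tag, tag_len, "golden retriever");
            |       return 0;
            |   }
            | 
            |   static struct accel_backend jetson = {
            |       .name = "jetson-inference",
            |       .image_classify = jetson_classify,
            |   };
            | 
            |   /* Backend chosen at runtime. */
            |   static struct accel_backend *active = &jetson;
            | 
            |   /* Hardware-agnostic entry point seen by the user. */
            |   int image_classify(const void *img, size_t len,
            |                      char *tag, size_t tag_len)
            |   {
            |       return active->image_classify(img, len, tag, tag_len);
            |   }
            | 
            |   int main(void)
            |   {
            |       unsigned char img[16] = { 0 };
            |       char tag[64];
            | 
            |       if (image_classify(img, sizeof(img),
            |                          tag, sizeof(tag)) == 0)
            |           printf("via %s: %s\n", active->name, tag);
            |       return 0;
            |   }
            | 
            | The virtio-accel path would just be another entry in that
            | table, forwarding the call to the host instead of running
            | it locally.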
        
       | amscanne wrote:
       | Based on the blog posts & diagrams linked here, the main
       | innovation seems to be the use of virtio as a transport mechanism
       | for the high-level "accelerated" functions. The portable API is a
       | good idea, but could just as easily connect to a well-known gRPC
       | service address (which could be always on-host, or not). In that
       | case, the transport would be good ol' virtio-net (or whatever),
       | but is otherwise the same to the user.
       | 
       | I suppose that with virtio, you can perhaps eliminate one copy
       | (but not necessarily, since you'll probably need to copy from the
       | guest memory into some device-mapped buffer). Other than it being
       | fun to write virtio backends for various hypervisors, I'm not
       | sure what the compelling advantage is over the more naive
       | transport? Perhaps I'm missing something.
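        | 
        | As a toy version of that alternative: the same portable call,
        | marshalled into a request and pushed over an ordinary byte
        | stream to an on-host service (gRPC over virtio-net would play
        | this role in practice). A socketpair stands in for the
        | guest<->host connection, and the wire format is made up:
        | 
        |   #include <stdio.h>
        |   #include <string.h>
        |   #include <sys/socket.h>
        |   #include <unistd.h>
        | 
        |   struct classify_req {         /* hypothetical wire format */
        |       unsigned int op;          /* 1 == image_classify */
        |       unsigned int img_len;
        |       unsigned char img[16];
        |   };
        | 
        |   int main(void)
        |   {
        |       int fds[2];
        |       char tag[64] = { 0 };
        | 
        |       if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0)
        |           return 1;
        | 
        |       /* "Guest" side: serialize the call and ship it out. */
        |       struct classify_req req = { .op = 1, .img_len = 16 };
        |       if (write(fds[0], &req, sizeof(req)) < 0)
        |           return 1;
        | 
        |       /* "Host" side: unpack the request and reply with a
        |        * label (a real service would call the GPU runtime). */
        |       struct classify_req in;
        |       if (read(fds[1], &in, sizeof(in)) < 0)
        |           return 1;
        |       const char *label = in.op == 1 ? "golden retriever"
        |                                      : "unknown";
        |       if (write(fds[1], label, strlen(label) + 1) < 0)
        |           return 1;
        | 
        |       if (read(fds[0], tag, sizeof(tag) - 1) < 0)
        |           return 1;
        |       printf("remote image_classify -> %s\n", tag);
        | 
        |       close(fds[0]);
        |       close(fds[1]);
        |       return 0;
        |   }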
        
         | _ananos_ wrote:
          | Indeed! virtio helps us tailor the transport layer to the
          | acceleration use case, rather than to frames to be
          | transmitted or blocks to be completed. The interesting twist
          | with vAccelRT (the runtime system, that is) is that, apart
          | from the simplicity the virtio backend gives for running in a
          | VM, people seem eager to use the vAccelRT mechanism to map a
          | complicated piece of code to a simple, hardware-agnostic
          | function call. We'll see how this turns out...
          | 
          | After all, API remoting has been around for quite some time
          | (see rCUDA); some people are using it, but we think a more
          | general, semantic abstraction is needed, especially for the
          | serverless use case.
        
       | tux1968 wrote:
        | So rather than new hardware, this seems to be an
        | application-level API to abstract away hardware and provide
        | portability between the supported targets.
        
         | _ananos_ wrote:
          | Yeap, the initial idea was to cover VMs (and one of the use
          | cases is indeed serverless, for instance with AWS
          | Firecracker), but as it turns out, there are users who might
          | benefit from this simplified abstraction in general.
        
       ___________________________________________________________________
       (page generated 2020-12-04 23:01 UTC)