[HN Gopher] Dalai: Automatically install, run, and play with LLa...
       ___________________________________________________________________
        
       Dalai: Automatically install, run, and play with LLaMA on your
       computer
        
       Author : cocktailpeanut
       Score  : 46 points
       Date   : 2023-03-12 22:17 UTC (42 minutes ago)
        
 (HTM) web link (cocktailpeanut.github.io)
 (TXT) w3m dump (cocktailpeanut.github.io)
        
       | cocktailpeanut wrote:
        | Hey guys, I was so inspired by the llama.cpp project that I
        | spent all day today building a weekend side project.
       | 
        | Basically it lets you one-click install LLaMA on your machine
        | with no bullshit. All you need to do is run "npx dalai llama".
       | 
       | I see that the #1 post today is a whole long blog post about how
       | to walk through and compile cpp and download files and all that
       | to finally run LLaMA on your machine, but basically I have 100%
       | automated this with a simple NPM package/application.
       | 
       | On top of that, the whole thing is a single NPM package and was
       | built with hackability in mind. With just one line of JS function
       | call you can call LLaMA from YOUR app.
       | 
       | Lastly, EVEN IF you don't use JavaScript, Dalai exposes a
       | socket.io API, so you can use whatever language you want to
       | interact with Dalai programmatically.
       | 
        | I discussed this a bit more in a Twitter thread. Check it
        | out:
       | https://twitter.com/cocktailpeanut/status/163504032247148953...
       | 
       | It should "just work". Have fun!
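The "one line of JS" call and the socket.io API mentioned above could look roughly like the sketch below. The event name ("request"), the default port, and the payload fields are assumptions for illustration only, not verified against the Dalai package.

```javascript
// Hypothetical sketch of talking to a locally running Dalai server.
// With `socket.io-client` installed, a non-JS-averse client might do:
//
//   const io = require("socket.io-client");
//   const socket = io("ws://localhost:3000");          // assumed port
//   socket.emit("request", buildRequest("Explain quicksort briefly."));
//   socket.on("result", (msg) => process.stdout.write(msg.response));
//
// Any language with a socket.io client library could do the same.

// Helper that builds the (assumed) request payload.
function buildRequest(prompt, model = "7B", nPredict = 128) {
  return { prompt, model, n_predict: nPredict };
}

console.log(JSON.stringify(buildRequest("Hello")));
```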
        
         | teruakohatu wrote:
          | Very nice. Any way to add an option to install somewhere
          | other than ~/ ?
        
         | yawnxyz wrote:
         | Wow that's so incredible. Thanks for putting this together!
         | 
         | Do you have any machine specs associated with this? Can an old-
         | ish Macbook Pro run this service?
         | 
         | I'm also curious, since I'm new to all this -- is it possible
         | to run something like this on Fly.io or does it take up way too
         | much space?
        
           | sp332 wrote:
           | 7B is the default. If it's quantized to 4 bits, that's a 3.9
           | GB file.
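The 3.9 GB figure is consistent with back-of-the-envelope math. A rough sketch (the ~4.5 effective bits per weight for block-wise scale factors is an estimate, not an exact ggml constant):

```javascript
// Back-of-the-envelope size check for a 4-bit quantized 7B model.
const params = 7e9;

// Pure 4-bit storage: half a byte per weight.
const pure4bit = (params * 0.5) / 1e9; // ≈ 3.5 GB

// Block-wise quantization also stores a scale factor per group of
// weights, pushing the effective cost to roughly 4.5 bits per weight.
const withScales = (params * 4.5) / 8 / 1e9; // ≈ 3.9 GB

console.log(pure4bit.toFixed(1), withScales.toFixed(1)); // 3.5 3.9
```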
        
         | GordonS wrote:
         | Looks great! Does it work on Windows please?
        
           | volaski wrote:
           | Curious too. Let me know if you try it out. Technically I
           | think it should work.
        
       | mahathu wrote:
       | Best name for a software project I've seen in a long time hands
       | down!
        
         | ilrwbwrkhv wrote:
         | I don't think anybody would have the guts to do this with
         | Muhammad or the Quran.
        
       | teruakohatu wrote:
        | I tried the 7B version. It goes off on a tangent for just
        | about every prompt. Anyone know how to avoid this?
       | 
       | For example:
       | 
        | <Prompt>
        | # Write a simple python function that
        | # 1. Ask me for a number in mile
        | # 2. It converts miles to kilometers
        | 
        | import
        | </prompt>
       | 
       | <response>math, random #define a variable distance =
       | input('Please enter the Miles:') Average Gasoline Prices By State
       | In July [CHART] Gas prices rose last week as crude oil futures
       | spiked higher amid concerns about tensions in Egypt and North
       | Africa. The average national price of gas increased to $3.50 per
       | gallon from Thursday, up 21 cents a day after the Energy
       | Information Administration reported that wholesale motor fuels
       | inventories declined last week by more than half-a-million
       | barrels (http://www.eia.gov/cneaf/electricity/epm_table3.html)
       | The U.S.-Canadian border was the scene of a mass shooting
       | yesterday afternoon, which left one person dead and several
       | others injured. Police say that this latest incident in
       | Bellingham is likely linked to an ongoing dispute between two
       | rival motorcycle clubs (http://www NULL .kulr8 NULL
       | com/Global/story/Local-news/120753946---NULL--). Solar energy
       | company GreenCharge Networks Inc. has hired several high profile
       | lawy</response>
        
       | mikenew wrote:
       | I know this is a bit tangential (awesome work OP), but has anyone
       | been able to get usable, consistent results from this thing? I've
          | been playing around with the 13B model with llama.cpp, and while
       | I do sometimes get good results, it often just gives me weird,
       | repetitive nonsense.
       | 
       | I know it hasn't been fine tuned on instructions or had RLHF like
       | ChatGPT, but has anyone figured out how to kinda work around it
       | and actually use it in the way you can ask ChatGPT a question and
       | typically get something coherent and useful out of it?
        
         | lalwanivikas wrote:
         | I just explained one solution on Twitter:
         | https://twitter.com/LalwaniVikas/status/1635035951654387712
        
         | delusional wrote:
         | I've been playing around with the 30B version all day. The
          | biggest improvement I've seen has come from changing the way I
         | prompt (strike a more in medias res style, the model really
         | likes continuing and gets confused if you give it a blank
         | slate), and implementing top_k sampling (also discard the
         | top_p=0 nonsense, you want top_p>1.0 to turn it off). It's
         | important to note that the llama.cpp project does NOT implement
         | top_k, even if you set that commandline parameter.
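The top_k idea described above, keep only the k most likely tokens, renormalize, then draw a sample, can be sketched like this (illustrative JavaScript, not llama.cpp's actual implementation):

```javascript
// Convert raw logits to probabilities (numerically stable softmax).
function softmax(logits) {
  const m = Math.max(...logits);
  const exps = logits.map((x) => Math.exp(x - m));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Sample a token id, considering only the k highest-logit tokens.
// Note: top_p filtering, if applied, would further trim this list;
// setting top_p above 1.0 keeps everything, i.e. disables it.
function sampleTopK(logits, k, rand = Math.random) {
  const top = logits
    .map((logit, id) => ({ logit, id }))
    .sort((a, b) => b.logit - a.logit)
    .slice(0, k);
  const probs = softmax(top.map((t) => t.logit));
  // Draw from the renormalized distribution over the surviving tokens.
  let r = rand();
  for (let i = 0; i < top.length; i++) {
    r -= probs[i];
    if (r <= 0) return top[i].id;
  }
  return top[top.length - 1].id;
}
```

With k = 1 this degenerates to greedy decoding; larger k trades determinism for variety.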
        
       | nstbayless wrote:
       | This looks really cool! How many gigs is the model that's
       | installed this way? If it's large it would be nice to include a
       | disclaimer.
        
       ___________________________________________________________________
       (page generated 2023-03-12 23:00 UTC)