[HN Gopher] From models of galaxies to atoms, simple AI shortcut...
       ___________________________________________________________________
        
       From models of galaxies to atoms, simple AI shortcuts speed up
       simulations
        
       Author : DarkContinent
       Score  : 35 points
       Date   : 2020-02-15 14:02 UTC (8 hours ago)
        
 (HTM) web link (www.sciencemag.org)
 (TXT) w3m dump (www.sciencemag.org)
        
       | [deleted]
        
       | Fomite wrote:
        | Interestingly, my lab has been working on emulators for one
        | of our simulation models, and we're _really_ struggling to
        | make meaningful improvements.
       | 
       | It's faster, but we're not there yet on accuracy.
        
       | willis936 wrote:
        | I was at a talk last week where the speaker spent a little
        | bit of time on using machine learning to fit a regression
        | matrix trained on the results of a simulation. The simulation
        | and the variables in the regression matrix were chosen such
        | that the AI could recreate an approximation of a known
        | physical law. This is fairly exciting to me because, if it
        | can be used to recreate a lot of the known laws in this
        | field, it could then be applied to experimental data to
        | untangle some of the mess and identify the relationships for
        | us. I could see this speeding along the development of the
        | science.
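        | 
        | A minimal sketch of that idea (a toy stand-in with made-up
        | variables, not the speaker's actual setup): here the
        | "simulation" is just the pendulum formula T = 2*pi*sqrt(L/g)
        | plus noise, and a log-log regression on the simulated data
        | recovers the exponents of the law.
        | 
        |   import numpy as np
        |   from sklearn.linear_model import LinearRegression
        | 
        |   rng = np.random.default_rng(0)
        |   L = rng.uniform(0.1, 10.0, 1000)  # pendulum length (m)
        |   g = rng.uniform(1.0, 25.0, 1000)  # gravity (m/s^2)
        |   noise = 1 + 0.01 * rng.standard_normal(1000)
        |   T = 2 * np.pi * np.sqrt(L / g) * noise
        | 
        |   # Regress log T on log L and log g; the fitted
        |   # coefficients are the exponents of the law.
        |   X = np.column_stack([np.log(L), np.log(g)])
        |   model = LinearRegression().fit(X, np.log(T))
        |   print(model.coef_)               # ~[0.5, -0.5]
        |   print(np.exp(model.intercept_))  # ~6.28 = 2*pi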
        
       | fxtentacle wrote:
       | "When they were turbocharged with specialized graphical
       | processing chips, they were between about 100,000 and 2 billion
       | times faster than their simulations."
       | 
       | Now the critical question is: How much faster is it without AI,
       | just because of the specialized dedicated processing chips?
       | 
       | Otherwise, they might be comparing a single virtualized CPU core
       | against a high-end GPU for things like matrix multiplication ...
       | and then the result that GPU > slow CPU isn't really that
       | impressive.
        
         | rrss wrote:
          | An alternative question is: how much faster is it with the
          | neural network-based emulation ("AI"), without the use of
          | the specialized dedicated processing chips? I think the
          | answer to this gives the information you are looking for.
         | 
         | The paper answers this question:
         | 
         | > While the simulations presented typically run in minutes to
         | days, the DENSE emulators can process multiple sets of input
         | parameters in milliseconds to a few seconds with one CPU core,
         | or even faster when using a Titan X GPU card. For the GCM
         | simulation which takes about 1150 CPU-hours to run, the
         | emulator speedup is a factor of 110 million on a like-for-like
         | basis, and over 2 billion with a GPU card. The speed up
         | achieved by DENSE emulators for each test case is shown in
         | Figure 2(h)
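          | 
          | Taking the quoted figures at face value, a quick back-of-
          | the-envelope check of the implied per-call emulator time:
          | 
          |   # 1150 CPU-hours of simulation divided by the quoted
          |   # speedup factors gives the time per emulator call.
          |   sim_seconds = 1150 * 3600   # 4.14e6 s
          |   print(sim_seconds / 110e6)  # ~0.038 s on one CPU core
          |   print(sim_seconds / 2e9)    # ~0.002 s on a GPU
          | 
          | Both land in the "milliseconds to a few seconds" range the
          | paper claims.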
        
         | allovernow wrote:
         | >Now the critical question is: How much faster is it without
         | AI, just because of the specialized dedicated processing chips?
         | 
          | Based on similar work we are doing at the startup I work
          | for, this isn't just GPU magic. ML is a heuristic
          | alternative to simulations that themselves already run on
          | specialized GPUs and TPUs. This modeling acceleration is
          | one of the many ways in which ML is poised to change
          | everything.
          | 
          | It's the same way that a human can, for instance,
          | approximately draw iso-temperature lines around a candle
          | flame without having to perform a simulation... except the
          | neural net is some 99%+ as accurate and detailed as a full
          | simulation. That's exactly why neural nets excel - they
          | learn complex heuristics much like humans do, but with the
          | added power of digitized computation and memory.
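          | 
          | A minimal surrogate-model sketch of this pattern (a toy
          | placeholder, not the paper's DENSE architecture): run the
          | expensive simulation at a few sampled parameter points,
          | then train a small network to emulate it everywhere else.
          | 
          |   import numpy as np
          |   from sklearn.neural_network import MLPRegressor
          | 
          |   def expensive_simulation(p):
          |       # Stand-in for a solver that takes minutes to hours.
          |       return (np.sin(3 * p[:, 0]) * np.exp(-p[:, 1])
          |               + 0.5 * p[:, 1] ** 2)
          | 
          |   rng = np.random.default_rng(0)
          |   train_p = rng.uniform(0, 1, size=(500, 2))
          |   train_y = expensive_simulation(train_p)  # slow, run once
          | 
          |   emulator = MLPRegressor(hidden_layer_sizes=(64, 64),
          |                           max_iter=2000,
          |                           random_state=0)
          |   emulator.fit(train_p, train_y)
          | 
          |   # New queries now cost microseconds, not solver time.
          |   test_p = rng.uniform(0, 1, size=(5, 2))
          |   print(emulator.predict(test_p))
          |   print(expensive_simulation(test_p))  # ground truth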
        
       | aimoderate wrote:
       | > It randomly inserts layers of computation between the networks'
       | input and output, and tests and trains the resulting wiring with
       | the limited data. If an added layer enhances performance, it's
       | more likely to be included in future variations.
       | 
        | Sounds a lot like genetic algorithms, but with neural
        | networks. I suspect we'll see more of this as people figure
        | out how to run the search over neural network architectures
        | that fit their own domains. Convolutions and transformers are
        | great and all, but we might as well let the computers do the
        | search and optimization too, instead of waiting on human
        | insights for stacking functions.
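        | 
        | A minimal sketch of that search loop (hypothetical, not the
        | paper's DENSE algorithm): randomly insert a hidden layer,
        | retrain on the limited data, and keep the change only if the
        | held-out score improves.
        | 
        |   import numpy as np
        |   from sklearn.neural_network import MLPRegressor
        |   from sklearn.model_selection import train_test_split
        | 
        |   rng = np.random.default_rng(0)
        |   X = rng.uniform(-1, 1, size=(300, 3))
        |   y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]  # toy target
        |   X_tr, X_va, y_tr, y_va = train_test_split(
        |       X, y, random_state=0)
        | 
        |   def score(layers):
        |       net = MLPRegressor(hidden_layer_sizes=tuple(layers),
        |                          max_iter=2000, random_state=0)
        |       return net.fit(X_tr, y_tr).score(X_va, y_va)  # R^2
        | 
        |   layers, best = [16], score([16])
        |   for _ in range(10):
        |       cand = list(layers)
        |       # Mutation: insert a layer at a random position.
        |       cand.insert(rng.integers(len(cand) + 1), 16)
        |       s = score(cand)
        |       if s > best:  # keep only improving variants
        |           layers, best = cand, s
        |   print(layers, best)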
        
       | dukoid wrote:
        | For some reason this reminds me of the famous Xerox copier
        | bug, where the compression algorithm would swap out digits:
        | https://news.ycombinator.com/item?id=6156238
        
       | chewxy wrote:
        | Who'd have thought compression would work so well?
       | 
       | (yes, neural networks are compression engines)
        
         | agumonkey wrote:
         | I always thought programming and even theory were knowledge
         | compression
        
         | andbberger wrote:
         | Not necessarily
        
         | tanilama wrote:
          | "Compression" in your context is as meaningless as
          | "generalization".
          | 
          | Yes, you can say generalization is compression.
        
           | fxtentacle wrote:
           | Except that "generalization" implies that it works for
           | previously unseen problems, which is usually not the case for
           | AI.
           | 
           | Compression, on the other hand, nicely captures the "learn
           | and reproduce" approach that using AI entails.
        
             | tanilama wrote:
              | "Unseen problems" is an ill-defined term. There is a
              | distinction between in-domain and out-of-domain data;
              | both can be unseen by the model beforehand.
              | 
              | Even a human agent requires training before being
              | deployed on unseen problems. Generalization is
              | conditioned on experience, after all.
              | 
              | AI generalizes to unseen in-domain data given a
              | specific task. That is why it is useful in the first
              | place.
        
         | nabla9 wrote:
          | This is not a meaningful analogy, because it is too
          | generic. Using it does not add anything to the discussion.
          | 
          | Any mathematical model is a 'compressed' form of reality,
          | and that's why models work well. 'Simplified' or
          | 'abstracted' is a better term than 'compressed'. Machine
          | learning adds a heuristic, data-driven model on top of the
          | scientific model.
        
         | FartyMcFarter wrote:
         | Do you mean in the same way that any mathematical function is a
         | compression engine? That is, you implement something that can
         | handle many cases (1+1, 2+3, 5+6) in a concise form?
         | 
         | It seems to me like the real magic of neural networks is that
         | they make it easier to search for a function that solves (to
         | some extent) a particular problem.
        
       ___________________________________________________________________
       (page generated 2020-02-15 23:00 UTC)