[HN Gopher] Implementing the Exponential Function
       ___________________________________________________________________
        
       Implementing the Exponential Function
        
       Author : hackernewsn00b
       Score  : 82 points
       Date   : 2020-06-28 19:14 UTC (3 hours ago)
        
 (HTM) web link (www.pseudorandom.com)
 (TXT) w3m dump (www.pseudorandom.com)
        
       | sakex wrote:
        | I was looking for exactly that today, thanks
        
       | m4r35n357 wrote:
       | [Aside on Taylor Series] The section on recurrence relations is
       | the basis of a little-known ODE solving technique sometimes known
       | as the Taylor Series Method (occasionally Differential
       | Transform). It allows an "extension" of Euler's Method to
       | arbitrary order (using arbitrary precision arithmetic), by
       | _calculating_ the higher derivatives rather than approximating
       | them as in RK4 and friends.
       | 
       | Here is an open access link to just one of many papers on the
       | subject: https://projecteuclid.org/euclid.em/1120145574
       | 
       | It includes a link to the actual software they used.
       | 
       | [EDIT] Obviously applying the technique above to dx/dt = x
       | reproduces the Taylor Series in the article!
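        | 
        | A minimal Python sketch of the idea (my illustration, not the
        | paper's code): for dx/dt = x, the recurrence c[k] = c[k-1]/k
        | generates the Taylor coefficients of the solution exactly, so
        | one step of arbitrary order is:

```python
import math

def taylor_step(x0, h, order):
    """One step of the Taylor Series Method for dx/dt = x.

    The ODE gives the recurrence c[k] = c[k-1] / k for the Taylor
    coefficients around the current point, so the higher derivatives
    are computed exactly rather than approximated as in RK4.
    """
    term = x0        # c[0] * h^0
    total = x0
    for k in range(1, order + 1):
        term = term * h / k     # c[k] * h^k from c[k-1] * h^(k-1)
        total += term
    return total

# Integrate dx/dt = x from t=0 to t=1 in ten 8th-order steps; the
# exact answer is e, and the error is far below Euler's for the same h.
x = 1.0
for _ in range(10):
    x = taylor_step(x, 0.1, 8)
```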
        
       | olliej wrote:
        | Ugh, I had to implement all the transcendental functions a
        | while ago. It was miserable, especially dealing with
        | intermediate rounding, etc.
       | 
       | I don't recommend it.
       | 
        | That said, I liked that there's a specific exp-1 function
        | (otherwise you lose most of the precision for small arguments).
        
         | mjcohen wrote:
         | To go with that, you need ln(1+x). HP calculators had both.
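          | 
          | A quick Python illustration of why the paired functions
          | matter (the math module exposes both):

```python
import math

x = 1e-12

# Naive exp(x) - 1 cancels: the subtraction throws away most of the
# significant digits once exp(x) has been rounded to a double near 1.
naive = math.exp(x) - 1

# expm1 evaluates e**x - 1 directly (~ x + x*x/2 here), keeping full
# precision; log1p is its companion for ln(1 + x).
good = math.expm1(x)
log_good = math.log1p(x)
```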
        
       | kens wrote:
       | A couple other ways to compute the exponential function: The
       | Intel 8087 coprocessor used CORDIC. The chip contained a ROM with
       | the values of log2(1+2^-n) for various n. These constants allowed
       | the exponential to be rapidly computed with shifts and adds.
       | 
       | The Sinclair Scientific calculator was notable for cramming
       | transcendental functions into a chip designed as a 4-function
       | calculator, so they took some severe shortcuts. Exponentials were
       | based on computing 0.99^n through repeated multiplication.
       | Multiplying by 0.99 is super-cheap in BCD since you shift by two
       | digits and subtract. To compute 10^x, it computes
       | .99^(-229.15*x). Slow and inaccurate, but easy to implement.
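        | 
        | Both tricks can be sketched in a few lines of Python
        | (illustrative only: the real chips worked in fixed point and
        | BCD respectively, and the constants below are the ones quoted
        | above):

```python
import math

def exp2_shift_add(x, nbits=30):
    """Shift-and-add sketch of 2**x for x in [0, 1): greedily subtract
    the tabulated constants log2(1 + 2**-n) from x, multiplying the
    result by (1 + 2**-n) each time (y += y >> n in hardware)."""
    y = 1.0
    for n in range(1, nbits + 1):
        c = math.log2(1.0 + 2.0 ** -n)
        while x >= c:
            x -= c
            y += y * 2.0 ** -n    # one shift and one add
    return y

def sinclair_pow10(x):
    """Sinclair-style sketch: 10**x ~= 0.99**round(-229.15*x),
    computed by repeated multiplication (or division) by 0.99."""
    n = round(-229.15 * x)
    y = 1.0
    for _ in range(abs(n)):
        y = y * 0.99 if n > 0 else y / 0.99
    return y
```

        | As the comment says: slow (hundreds of multiplies) and only
        | good to a few digits, but nothing beyond a shift-and-subtract
        | datapath is needed.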
        
       | BernardTatin wrote:
       | Very good article! Thanks!
        
       | dietrichepp wrote:
       | I recently implemented the exponential function for a sound
       | synthesizer toolkit I'm working on. The method I used was not
       | mentioned in the article, so I'll explain it here.
       | 
       | I used the Remez algorithm, modified to minimize equivalent input
       | error. This algorithm lets you find a polynomial with the
       | smallest maximum error. You start with a set of X coordinates,
       | and create a polynomial which oscillates up and down around the
       | target function, so p(x0) = f(x0) + E, p(x1) = f(x1) - E, p(x2) =
       | f(x2) + E, etc.
       | 
        | You then iterate by replacing the reference points with the x
        | values where the error is largest. This converges to the
        | polynomial with the smallest maximum error.
       | 
       | From my experiments, you can use a fairly low order approximation
       | for musical applications. Used to calculate frequencies of
       | musical notes, 2nd order is within 3.0 cents, and 3rd order is
       | within 0.13 cents.
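        | 
        | For the curious, the core of one Remez iteration is just a
        | linear solve for the coefficients plus the equioscillating
        | error E at the current reference points. A hypothetical numpy
        | sketch for 2**x on [0, 1] (degree 3, so n+2 = 5 points; not
        | dietrichepp's actual code):

```python
import numpy as np

def remez_step(f, xs):
    """Solve p(x_i) + (-1)**i * E = f(x_i) at the n+2 reference
    points xs; returns the degree-n coefficients (lowest first) and
    the signed level error E."""
    m = len(xs)                        # m = n + 2 points
    n = m - 2                          # polynomial degree
    A = np.zeros((m, m))
    for i, x in enumerate(xs):
        A[i, :n + 1] = [x ** j for j in range(n + 1)]
        A[i, n + 1] = (-1) ** i        # alternating +E / -E column
    sol = np.linalg.solve(A, f(xs))
    return sol[:n + 1], sol[n + 1]

f = np.exp2
# Chebyshev extrema on [0, 1] make a good initial reference set.
xs = np.sort(0.5 + 0.5 * np.cos(np.pi * np.arange(5) / 4))
coeffs, E = remez_step(f, xs)

# A full Remez loop would now move xs to the extrema of p - f and
# re-solve; with a Chebyshev start the error is already nearly level.
grid = np.linspace(0.0, 1.0, 10001)
max_err = np.max(np.abs(np.polyval(coeffs[::-1], grid) - f(grid)))
```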
        
         | exmadscientist wrote:
         | Remez is the most common implementation of Minimax
         | approximation algorithms, so it's probably coming when the
         | author makes it to section 6.
        
           | fractionalhare wrote:
           | Yes, this is the plan :)
        
             | [deleted]
        
         | dvt wrote:
         | This is way cool! Mad respect to you audio engineers out there.
         | I tried implementing FFT last year in one of my open-source
         | projects[1] (key word: _tried_ ).
         | 
         | [1]
         | https://github.com/dvx/lofi/commit/285bf80ff6d7f0784c14270de...
        
       | dvt wrote:
       | This is an awesome article! One of my favorite SO answers I
       | remember researching dealt with the Pade approximation of tanh[1]
       | (which I found was significantly better than the Taylor
       | expansion), but the caveat was that it only worked within a very
       | narrow neighborhood.
       | 
       | I will say that the article didn't really touch on techniques
       | that minimize IEEE 754 subnormal[2] performance impact, which is
       | a very interesting problem in itself. A lot of real-life
       | implementations of something like e^x will have various clever
       | ways to avoid subnormals.
       | 
       | [1] https://stackoverflow.com/a/6118100/243613
       | 
       | [2] https://mathworld.wolfram.com/SubnormalNumber.html
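        | 
        | For anyone curious what such a rational approximation looks
        | like, here is a [3/2] Pade approximant of tanh in Python (my
        | illustration, not necessarily the exact formula in the linked
        | answer):

```python
import math

def tanh_pade(x):
    """[3/2] Pade approximant of tanh, obtained from the continued
    fraction tanh(x) = x / (1 + x**2 / (3 + x**2 / 5)). It matches
    tanh through the x**5 term, so as noted above it is only accurate
    in a narrow neighborhood of zero."""
    x2 = x * x
    return x * (15.0 + x2) / (15.0 + 6.0 * x2)
```

        | Near zero it beats a Taylor truncation of the same order, but
        | by |x| ~ 2 the error is already at the percent level.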
        
         | fractionalhare wrote:
         | Hi, I am the author. You make a good point about subnormals,
         | thank you. I will jot that down for a future update to the doc
         | :)
         | 
         | It is funny you mention the Pade approximation; I was debating
         | whether or not to include that in this article. Originally I
         | planned to stop after Minimax (using Remez), but if I include
         | rational approximation schemes anyway (e.g. Caratheodory-
         | Fejer), it probably makes sense to implement and analyze that
         | as well.
        
           | dia80 wrote:
            | Just jumping in to say I am loving the article, thanks! I'm
            | dealing with a lot of exps in an optimization problem right
            | now and frankly only have a cursory understanding of what's
            | going on under the hood, so this was very enlightening for
            | me. Thanks again.
        
         | m4r35n357 wrote:
         | Just a suggestion, have you looked at the MPFR project, and
         | derivatives: https://www.mpfr.org/
         | 
         | http://mpfr.loria.fr/mpfr-current/mpfr.html#index-mpfr_005fs...
         | 
          | It is way too deep in the grubby details for my pay grade,
          | but who knows?
        
       | nimish wrote:
        | Need to be careful about confusing smooth (infinitely
        | differentiable) and analytic (equal to its Taylor series)
        | functions.
       | 
       | > Any function with this property can be uniquely represented as
       | a Taylor series expansion "centered" at a
       | 
       | It's fairly easy to construct tame looking functions where this
       | isn't true.
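        | 
        | For reference, the classic counterexample is

```latex
f(x) =
  \begin{cases}
    e^{-1/x^{2}} & x \neq 0 \\
    0            & x = 0
  \end{cases}
```

        | which is smooth everywhere, but every derivative vanishes at
        | 0, so its Taylor series centered there is identically zero
        | and equals f only at the single point x = 0.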
        
         | fractionalhare wrote:
         | Nice spot, thank you! I have corrected that. I also included
         | your comment in an acknowledgments and errata section of the
         | doc.
        
         | bollu wrote:
          | Blessedly, in the complex analytic setting, these coincide,
          | do they not? [Attempting to sanity-check my own
          | understanding here]
        
           | contravariant wrote:
           | In fact it is enough for functions to be complex
           | differentiable once (in which case they'll automatically be
           | differentiable infinitely often).
        
           | ngoldbaum wrote:
            | Yes, all locally smooth complex functions (i.e. those
            | whose real and imaginary parts satisfy the Cauchy-Riemann
            | equations) are analytic, which is one of the main reasons
            | complex analysis is way less of a pain than real analysis.
        
       ___________________________________________________________________
       (page generated 2020-06-28 23:00 UTC)