[HN Gopher] Circuit Design and Applied Optimization (Part 1)
       ___________________________________________________________________
        
       Circuit Design and Applied Optimization (Part 1)
        
       Author : stefanpie
       Score  : 61 points
       Date   : 2021-12-31 17:17 UTC (5 hours ago)
        
 (HTM) web link (stefanabikaram.com)
 (TXT) w3m dump (stefanabikaram.com)
        
       | mayapugai wrote:
       | I'm just here to say this article was such a pleasure to read.
       | I've always relied on intuition to pick good-enough values based
       | on experience. So this rigorous analytical approach to viewing
       | the objective space and picking a reasonable value is refreshing.
       | It's ironic and rightfully hilarious that you still had to take a
       | gander and pick a "close-enough" value.
       | 
       | I look forward to part 2 where you incorporate the resistor
       | choices that you had. Perhaps also include the statistics of the
       | tolerance into the mix to find the optimal values that we should
       | all be picking for our future 555 timer hijinks.
       | 
        | Your academic research work is also very interesting. Suffice it
        | to say, I'll be following you on GitHub :)
       | 
       | Cheers and happy (almost) new year!
        
       | neltnerb wrote:
       | This reminds me of an undergraduate course on diffusion (atomic
       | scale movement of atoms in a solid).
       | 
       | There are known formal differential equations that let you solve
       | for the exact diffusion profile given geometry, composition,
       | diffusion rate constants, and temperature (basically). So on the
       | homework they asked us to tell them how long it took for the
       | concentration of the dopant to reach 10% at a 10 micron depth.
       | 
       | So of course we all reached for this new math we had learned.
       | 
        | We all got it wrong, with the professor commenting that "you
        | should have just used the approximate formula [which was trivial
        | algebra]; we only know the diffusion rate to an order of
        | magnitude anyway". That was far more useful as feedback than the
        | 0.2% of our grade we lost.
       | 
        | So I look at this essay and am a bit amused at minimizing
        | calculation error to such an extent in a mathematical model when
        | your resistors have a 5% tolerance and your capacitor probably
        | has a 20% tolerance, if you happened to get it from a university
        | EE lab. But I do appreciate the fun of doing it this way too if
        | you're not in a hurry =)
       | 
        | It is crucial for any systems designer to realize that no matter
        | how well you do your math and theory, you also have to
        | understand the sensitivity to variability, so that you know
        | which math is worth doing and which components you have no
        | choice but to hold to tighter tolerances.
       | 
        | You can find some fancy software for generating filter networks
        | (similar in concept to this, except with more discrete math
        | because component values are discrete...), ask it to show you
        | the sensitivity, and see exactly how much you'll screw up your
        | perfectly designed 10th-order Chebyshev filter if that last
        | resistor is off by 1%...
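        | 
        | For the 555 in the article, a quick numerical sensitivity check
        | makes the same point (a Python sketch; the nominal R1, R2, and C
        | values are just assumptions):
        | 
        |     # 555 astable: change in frequency and duty cycle for a
        |     # +1% change in each part, around assumed nominals.
        |     R1, R2, C = 1e3, 72e3, 10e-9
        | 
        |     def freq(r1, r2, c):
        |         return 1.44 / ((r1 + 2 * r2) * c)
        | 
        |     def duty(r1, r2, c):
        |         return (r1 + r2) / (r1 + 2 * r2)
        | 
        |     f0, d0 = freq(R1, R2, C), duty(R1, R2, C)
        |     cases = {"R1": (R1 * 1.01, R2, C),
        |              "R2": (R1, R2 * 1.01, C),
        |              "C":  (R1, R2, C * 1.01)}
        |     for name, args in cases.items():
        |         df = freq(*args) / f0 - 1
        |         dd = duty(*args) / d0 - 1
        |         print(f"{name} +1%: f {df:+.2%}, duty {dd:+.3%}")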
        
         | stefanpie wrote:
          | Oh, interesting application. I have taken a class in
          | microelectronics physics (doping, electron holes, bandgaps,
          | MOSFETs, BJTs) and it definitely focused on simpler
          | approximations and only first- and second-order effects.
         | 
          | But you are correct that after my second or third electronics
          | lab I realized that even standard component values vary so
          | much that this approach is a little overkill and impractical
          | in a lot of cases. However, there is a way to bake discrete
          | component values into the continuous optimization problem by
          | using a continuous/"soft"/differentiable approximation of the
          | min function to create "pockets" of optimality around
          | component values that belong to a discrete set (for example,
          | E-series values). I plan to do more writeups exploring this
          | idea, as well as looking at more complex applications such as
          | higher-order filters, as you mentioned (my op-amp filter labs
          | are what motivated me to look into this, since I wasted so
          | many hours in open lab trying to get my component values
          | right).
        
           | sidpatil wrote:
            | > However, there is a way to bake discrete component values
            | into the continuous optimization problem by using a
            | continuous/"soft"/differentiable approximation of the min
            | function to create "pockets" of optimality around component
            | values that belong to a discrete set (for example, E-series
            | values).
           | 
           | Is this the same concept as uncertainty sets in a robust
           | optimization problem?
        
             | stefanpie wrote:
              | After some quick searching (https://www.princeton.edu/~aaa/
              | Public/Teaching/ORF523/S16/OR...), I don't think so. It
              | looks like in a robust optimization problem you don't know
              | the objective function for certain, or you have uncertain
              | data measurements feeding it, which I don't think is the
              | case here. I could be completely wrong though.
              | 
              | From my understanding, if you want to be able to pick only
              | from a discrete set of components for certain variables,
              | you are essentially transforming the problem into a mixed-
              | integer non-linear programming (MINLP) problem. I tried to
              | find an easy way to do this with some Python optimization
              | libraries, but they always needed other packages which
              | were hard to install on Windows. So my solution was to
              | "relax" the discrete constraints by using a continuous
              | approximation of the minimum function applied to the error
              | from the nearest possible component value. This also lets
              | you assign a weighting to trade off having realistic
              | component values against having lower error for your main
              | objective.
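              | 
              | To make that concrete, here is a rough sketch of the kind
              | of penalty I mean (not the actual code; the E12 values,
              | the softmin temperature, the fixed cap, and the 1 kHz
              | target are all just illustrative):
              | 
              |     import numpy as np
              | 
              |     # E12 preferred values over a few decades
              |     E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7,
              |            3.3, 3.9, 4.7, 5.6, 6.8, 8.2]
              |     SERIES = np.array([v * 10 ** k
              |                        for k in range(2, 6)
              |                        for v in E12])
              | 
              |     def softmin(x, tau=0.05):
              |         # smooth, differentiable stand-in for min(x)
              |         return -tau * np.log(np.sum(np.exp(-x / tau)))
              | 
              |     def series_penalty(r):
              |         # near zero when r is close to a preferred value
              |         err = np.abs(np.log10(r) - np.log10(SERIES))
              |         return softmin(err)
              | 
              |     def objective(p, w=10.0):
              |         r1, r2 = p
              |         c = 10e-9                  # cap held fixed
              |         f = 1.44 / ((r1 + 2 * r2) * c)
              |         err = ((f - 1000.0) / 1000.0) ** 2
              |         return err + w * (series_penalty(r1)
              |                           + series_penalty(r2))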
        
           | neltnerb wrote:
            | Yeah, at the least you can just sample from a Gaussian
            | distribution around the nominal value; the shape is probably
            | wrong, but it's got to be better than assuming no error. Or
            | you could just do error analysis on the formula to find out
            | which terms contribute the most to the final error.
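            | 
            | Something as simple as this already tells you how much the
            | math is worth (a sketch; the tolerances and nominal values
            | are assumptions, and real parts are not actually Gaussian):
            | 
            |     import numpy as np
            | 
            |     rng = np.random.default_rng(0)
            |     n = 10_000
            |     # assume 5% resistors, 20% cap, tolerance ~ 3 sigma
            |     R1 = rng.normal(1e3, 1e3 * 0.05 / 3, n)
            |     R2 = rng.normal(72e3, 72e3 * 0.05 / 3, n)
            |     C = rng.normal(10e-9, 10e-9 * 0.20 / 3, n)
            | 
            |     f = 1.44 / ((R1 + 2 * R2) * C)   # 555 frequency
            |     d = (R1 + R2) / (R1 + 2 * R2)    # duty cycle
            |     print(f"f: {f.mean():.0f} +/- {f.std():.0f} Hz")
            |     print(f"D: {d.mean():.3f} +/- {d.std():.3f}")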
           | 
            | I like that your suggestion is in line with how we can now
            | use an autoencoder to convert a molecular graph into a
            | continuous latent space, which can be used to train a
            | network to predict some property. That gives us a latent
            | space where things with similar predicted properties sit
            | near each other, a space we can explore and then decode
            | into new candidate materials/molecules.
           | 
           | Drug discovery, solar panels, I'm looking forward to the
           | field taking off industrially for sure. Applying machine
           | learning and advanced computational techniques to ad-tech is
           | very depressing with problems like these out there.
        
         | xondono wrote:
          | I had a professor who intentionally baked _hard_ (for the
          | course takers, anyway) mathematics into some problems. They
          | were technically solvable during an exam, but doing so
          | required a very good command of the material (they typically
          | involved rare/obscure identities). The point was that solving
          | them precisely was always a "trap".
         | 
          | His recurring point was that engineering isn't about finding
          | a precise answer, but about finding good-enough answers fast.
         | 
            | I liked that this forced you to stay alert for possible
            | simplifications during the problems and to understand what
            | you were doing, instead of just droning through method A
            | for solving problem type B.
        
           | cinntaile wrote:
            | If the course is not a mathematical course that requires
            | this knowledge, then adding obscure identities is focusing
            | on the wrong thing entirely. Some people will simply not see
            | these identities because they may never have used them.
            | Irrelevant for the course but relevant for the grade; that's
            | not a good combo imo.
        
             | mlyle wrote:
             | I read the parent's comment differently from your reading.
             | 
             | I read the parent's comment to say that there was stuff
             | that was barely possible for the best students to solve
             | precisely in the time given, but that there were obviously-
             | acceptable approximations thereof. And that either was a
             | path to full credit.
             | 
             | The professor's goal was to get the students to realize
             | when to use approximations.
        
               | cinntaile wrote:
               | It seems like I misread the original comment indeed.
        
             | xondono wrote:
              | The point was precisely to make you realize that some
              | problems might have analytical solutions, but that those
              | would take too long. If you're solving problems with pen
              | and paper nowadays, you're most likely looking for an
              | approximate answer anyway.
             | 
              | Maybe I should add that, as long as the answer was within
              | a specified margin (say 5%), it was considered correct.
             | 
             | If you saw that your calculations started to become way too
             | complex, you had already missed something.
        
       | xondono wrote:
       | You might want to look into whatever you are using to render
       | LaTeX, because it doesn't work on iOS.
       | 
        | I've had zero problems with MathJax so far.
        
         | stefanpie wrote:
          | I'll take a look at this, thank you for the note. I believe
          | I'm using MathJax currently, but I may not have set some
          | options correctly. I also need to go in and verify a11y as
          | well.
        
       | [deleted]
        
       | hyperman1 wrote:
        | The formula for the duty cycle makes no sense. My rusty brain
        | thinks it should be D = (R1 + R2) / (R1 + 2*R2), but it's been
        | multiple decades since I used it, so I might be wrong.
        
         | stefanpie wrote:
          | You are totally correct; this was just a typo in the LaTeX,
          | which I have just pushed a fix for. Thank you for the note.
        
           | hyperman1 wrote:
           | Now it says 404. The correct URL has a -1 after it
           | 
           | Update: Eight o'clock and all is well ;-). Thanks for the
           | interesting article.
        
             | stefanpie wrote:
              | Should be back to normal again. Apologies for the issues;
              | web dev is not my strong point.
        
       | docfort wrote:
       | I believe this is supposed to be the first in a series on moving
       | from continuous to discrete optimization, but the EE in me can't
       | help but point out what I would do in this scenario. It also
       | connects with other interesting aspects of physics.
       | 
       | Looking at the governing equations, you can clearly see that if
       | R1 << R2, then the duty cycle is close to 50%.
       | 
       | With that done, I also would have fixed the cap to something that
       | is available. Ignoring R1 for the moment (because I just need to
       | ensure R2 is bigger), I solve for R2 in the frequency equation.
       | It is approximately 72 kOhms.
       | 
        | I notice that a nonzero value of R1 is really there to tune the
        | frequency. As long as R1 is much smaller than R2, the frequency
        | equation is more sensitive to changes in R1 than the duty cycle
        | equation is. So I can play with different small values of R1 to
        | tune my frequency and get closer.
       | 
        | Finally, since I know I'm likely using resistors with loose
        | tolerances, I know I can pass if I just get close, so I might
        | not need to be picky about R1.
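        | 
        | Numerically, with the usual 555 astable relations (the 1 kHz
        | target and 10 nF cap are my assumptions; they are what make R2
        | land near 72 kOhms):
        | 
        |     f_target, C = 1000.0, 10e-9     # assumed target and cap
        |     # f = 1.44 / ((R1 + 2*R2) * C); with R1 << R2 this is
        |     # roughly 1.44 / (2 * R2 * C), so:
        |     R2 = 1.44 / (f_target * C) / 2  # 72,000 ohms
        |     R1 = 1e3                        # small but nonzero
        |     f = 1.44 / ((R1 + 2 * R2) * C)  # ~993 Hz
        |     D = (R1 + R2) / (R1 + 2 * R2)   # ~0.503
        |     print(R2, f, D)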
       | 
        | In my opinion, this chain of reasoning (effective modeling,
        | function sensitivity with respect to parameters, tolerance
        | specs) is what the lab experiment is actually about. Developing
        | circuits that are tolerant to parameter variation is the key to
        | real hardware products. That gives you so much flexibility in
        | price and manufacturer, and it requires the designer to keep
        | this kind of reasoning front of mind.
        
         | hermitdev wrote:
          | One of my degrees is in EE. This post surely took me back to
          | some of the frustrating design labs, with more unknowns than
          | equations to solve for them... I also definitely went down the
          | route of just fixing the cap, because, well, we only had like
          | 2 or 3 of them in our parts kit anyway. And I appreciate you
          | mentioning the resistor tolerances because, no, you don't
          | really have a 1 kOhm resistor. You have a 987 or a 1009 or
          | something else near 1000 Ohms.
        
         | kurthr wrote:
          | Exactly, and if you know you're not going to do better than 1%
          | metal film resistors, why search more than the discrete space?
          | Between 10^2 and 10^6 there are only 6-96 steps per decade (12
          | for the standard 10% E12 series). That's fewer than 400x400
          | possible resistor value pairs to check.
        
           | stefanpie wrote:
            | I actually ended up doing some variation of this for scripts
            | I wrote for my labs. The space is actually quite small, and
            | it's not hard to brute-force this problem. Once you get to
            | more open-ended designs and larger parameter spaces (like
            | filters), it becomes a bit more challenging to brute-force.
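            | 
            | For anyone curious, the brute-force version is only a few
            | lines (a sketch; E12 over four decades and a 1 kHz / 50%
            | duty target are just assumptions):
            | 
            |     import itertools
            | 
            |     E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7,
            |            3.3, 3.9, 4.7, 5.6, 6.8, 8.2]
            |     vals = [v * 10 ** k for k in range(2, 6)
            |             for v in E12]       # 100 ohm .. 820 kohm
            |     C = 10e-9
            | 
            |     def cost(r1, r2):
            |         f = 1.44 / ((r1 + 2 * r2) * C)
            |         d = (r1 + r2) / (r1 + 2 * r2)
            |         return (abs(f - 1000.0) / 1000.0
            |                 + abs(d - 0.5))
            | 
            |     best = min(itertools.product(vals, vals),
            |                key=lambda p: cost(*p))
            |     print(best, cost(*best))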
        
       | hyperman1 wrote:
        | In Feynman's "Surely You're Joking" there's a chapter where he
        | designs some machinery for the army. The guidance he gets is to
        | use cogs from the middle of the list of available options, as
        | the smallest and largest parts have all kinds of downsides.
       | 
       | This idea works well in all kinds of situations where you have to
       | select parts. I assume it might do well here, too.
        
         | triactual wrote:
         | This works until Microsoft also picks from the middle to make
         | ten trillion Xbox controllers overnight and you can't get parts
         | for six months. There are so many competing constraints just in
         | component selection.
        
       ___________________________________________________________________
       (page generated 2021-12-31 23:00 UTC)