[HN Gopher] Physicists are building neural networks out of vibra...
       ___________________________________________________________________
        
       Physicists are building neural networks out of vibrations, voltages
       and lasers
        
       Author : pseudolus
       Score  : 237 points
       Date   : 2022-06-01 10:02 UTC (12 hours ago)
        
 (HTM) web link (www.quantamagazine.org)
 (TXT) w3m dump (www.quantamagazine.org)
        
       | cpdean wrote:
        | After all, we tricked sand into thinking for us, so it makes
        | sense that certain applications could run on other media.
        
       | reality_inspctr wrote:
       | but what does the universe think of us?
       | 
       | --my friend on signal
        
         | schmeckleberg wrote:
         | well, we haven't been gamma ray burst'd yet. _sheepish thumbs
         | up_
        
       | reality_inspctr wrote:
       | Bob Moog - who (basically) invented the synthesizer - was a
       | passionate organic gardener. His belief system, in many ways, saw
       | the two as similarly allowing humans to interface with the
       | intelligence of the universe.
        
         | TedDoesntTalk wrote:
         | > saw the two as similarly allowing
         | 
         | I love bob moog, so can you explain this a little further? How
         | is gardening a way to interface with the intelligence of the
         | universe?
        
       | kingkawn wrote:
        | makes ya wonder what sorts of computing were done by the
        | ancients with the natural materials they had available.
        
       | photochemsyn wrote:
       | Reads like science fiction becoming reality. In particular, the
       | science fiction series by Hannu Rajaniemi (Quantum Thief, Fractal
       | Prince, Causal Angel) has 'natural computational substrates' as
       | one of its themes.
       | 
       | This all seems to exist on the borderland between discrete and
       | continuous mathematics, which is a pretty fascinating topic.
       | Digital systems rely on discrete mathematics, while things like
       | fluid dynamics are much more in the world of continuous smooth
       | functions. It seems as if they're really building an interface
       | between the two concepts.
        
         | alach11 wrote:
         | Indeed. This is straight out of Permutation City by Greg Egan.
        
         | dane-pgp wrote:
         | I'm reminded of the Church-Turing-Deutsch principle[0] which
         | states that a universal computing device can simulate every
         | physical process.
         | 
         | Putting that another way, I think it means that anything that
         | can happen in the universe can be modelled by sets of equations
         | (which we might not have yet) which can be calculated on a
         | universal Turing machine.
         | 
         | There is the question of what can quantum computers do
         | polynomially or exponentially faster than a classical computer,
         | but I think it's accepted that all quantum computations can be
         | achieved classically if you don't mind waiting.
         | 
         | [0]
         | https://en.wikipedia.org/wiki/Church%E2%80%93Turing%E2%80%93...
        
       | RappingBoomer wrote:
       | the art and science of measurement is still quite an obstacle for
       | us today...
        
         | Agamus wrote:
         | Data science is exposing the limits of the paradigm of
         | individuation, on which mathematics is based. It is a flawed
         | simulacrum of a fluxing universe which never stops changing,
         | never solidifies into a value, a digit, an individual thing.
         | 
         | Mathematics as a reflection of reality presumes that there is a
         | pause button on the universe. This also explains why philosophy
         | has made no substantial progress in the past few thousand years
         | - it makes the same assumption in the idea of 'being', which is
         | an impossibility for the same reason.
        
           | mhh__ wrote:
           | I think data science suffers that more than mathematics
           | does...
        
             | Agamus wrote:
             | In my mind, mathematics assumes that things do not change
             | by saying that anything stays static for long enough to be
              | called "one thing".
             | 
             | The philosophical basis of the concept of "one" is flawed,
             | in my mind. As such, the rest of it is a self-referential
             | invention, much like logic. While the universe seems very
             | much like it is written in the language of mathematics, it
             | is not.
             | 
             | On the same note, the metaphysical idea of 'being' makes
             | the same mistake, which explains why two thousand+ years of
             | metaphysics has been mostly spinning tires.
             | 
             | I think the research in this story is on to something.
        
               | meroes wrote:
                | What about "something which changes is always equal to
                | itself"? Or "experience is real"? Those are static
                | statements. Curious how you'd deny these kinds of things.
        
           | zmgsabst wrote:
           | > Mathematics as a reflection of reality presumes that there
           | is a pause button on the universe.
           | 
           | This sounds more like your personal biases than a fact about
           | mathematics.
        
             | hans1729 wrote:
             | I suppose the "as a reflection of reality" is the catch in
             | that phrase.
             | 
              | Is pi built into the fabric of that which is absolute? Which
             | statements are we able to make about axioms that hold
             | outside of our reference frame?
        
               | zmgsabst wrote:
               | Pi describes relationships and outcomes we see in reality
               | when actions are performed -- and that abstract relation
               | explains the commonality in many experiences.
               | 
                | E.g., tossing matchsticks relates to pi (Buffon's
                | needle).
               | 
               | https://www.youtube.com/watch?v=sJVivjuMfWA
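                | 
                | For a rough sense of that relationship, here is a quick
                | Buffon's-needle Monte Carlo sketch (a hypothetical toy
                | illustration, not the setup from the video):
                | 
                |     import random, math
                | 
                |     # Buffon's needle: unit needles on unit-spaced
                |     # lines cross a line with probability 2/pi.
                |     n, hits = 100_000, 0
                |     for _ in range(n):
                |         x = random.random() / 2        # gap to line
                |         a = random.random() * math.pi  # angle
                |         if x <= math.sin(a) / 2:       # crosses?
                |             hits += 1
                | 
                |     print(2 * n / hits)                # roughly 3.14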
        
       | dhon_ wrote:
       | This reminds me of the method of calculating Fourier transform by
       | refracting light through a prism and reading off the different
       | frequencies. You get the "calculation" for free.
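        | 
        | For comparison, getting the same frequency readout digitally
        | takes an explicit transform; a minimal NumPy sketch with a
        | made-up two-tone signal (hypothetical numbers, just to show the
        | idea):
        | 
        |     import numpy as np
        | 
        |     # Two tones at 50 Hz and 120 Hz; the FFT pulls out the
        |     # "colors" a prism would separate for free.
        |     fs = 1000                 # assumed sample rate
        |     t = np.arange(0, 1, 1 / fs)
        |     signal = (np.sin(2 * np.pi * 50 * t)
        |               + 0.5 * np.sin(2 * np.pi * 120 * t))
        | 
        |     spectrum = np.abs(np.fft.rfft(signal))
        |     freqs = np.fft.rfftfreq(len(signal), 1 / fs)
        |     print(freqs[spectrum.argmax()])   # ~50.0, dominant tone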
        
         | SilasX wrote:
         | Like how mirrors "compute" the (appropriately defined) reverse
         | of an image?
        
         | V-2 wrote:
         | This perspective fits nicely with the simulation theory.
         | 
         | If we accept it, for argument's sake, then what's happening is
         | essentially delegating the computation to the ultra-meta-
         | computer that runs the simulation.
        
           | Syzygies wrote:
           | Whether the universe is a simulation is unknowable, but the
           | universe could consist of thought. If so, this research is
           | dangerous; like the Trinity nuclear test, the conflagration
           | could alter our neighborhood of the universe.
           | 
           | I had a pretty convincing revelation last night that the
           | simulation was run by insects. I could only get back to sleep
           | by ridiculing myself for such a derivative thought. Or is
           | there a reason it's universal?
        
             | schmeckleberg wrote:
             | I think Diaspora by Greg Egan covers some of this territory
             | 
             | ...that or the much older idea that if the whole universe
             | is the dream of a dragon (or a butterfly or Chuang Chou)
             | then let's not do anything that's too startling or
             | implausible so we don't wake them up and end it all!
        
           | arrow7000 wrote:
           | It also fits nicely with the universe just being
           | mathematically consistent
        
             | V-2 wrote:
             | This misses the "computation" (being shifted from one layer
             | to another) aspect though.
             | 
             | Universe being mathematically consistent and being
             | simulated are completely orthogonal concepts.
        
               | arrow7000 wrote:
               | I understood your comment to be an argument in favour of
               | the simulation hypothesis. So my comment says that that
               | doesn't work.
               | 
               | On second reading though it seems like all you're
               | proposing is a mental model for 'analog' computation;
               | that it's like outsourcing the computation to a lower
               | level of hardware. Then yes I agree with that.
        
         | alliao wrote:
          | oh god, I can see it coming. an elaborate analogue music
          | player for a special price. it's using nothing but light.
          | the fuzzy output will be its feature; sought after by
          | misdirected audiophiles...
        
         | nurettin wrote:
         | This is solarpunk material.
        
         | stackbutterflow wrote:
         | Is it calculation or simulation?
        
           | Banana699 wrote:
           | Not much difference here, Calculation (or, more generally,
           | Computation) is the manipulation of abstract symbols
           | according to pure rules that may or may not represent
           | concrete entities, e.g. the simplification of polynomials
           | according to the rule of adding like powers.
           | 
           | Simulation is when we manipulate things (concrete or
           | abstract) according to the rules that govern other concrete
           | things, e.g. pushing around balls in circles to (highly
           | inaccurately) represent the orbit of planets around a star.
           | 
           | Not all calculation is simulation, and not all simulation is
           | calculation, but there exists an intersection of both.
           | 
           | The key trick you can do with that last category is that when
           | the physical system you're simulating is controllable enough,
           | you can use the correspondence in the other direction: Use
           | the concrete things to simulate the abstract things. It's
            | simulation, because you're manipulating concrete entities
            | according to the rules that govern other entities (which
            | happen to be abstract), but what you're doing also amounts
            | to doing a calculation with those abstract entities.
        
           | [deleted]
        
         | ulnarkressty wrote:
          | An even better one - placing an image at the front focal
          | plane of a lens produces its Fourier transform at the back
          | focal plane[0]. It is used for "analog" pattern matching[1].
          | There is an interesting video explaining this on the Huygens
          | Optics Youtube channel[2].
         | 
         | [0] - https://en.wikipedia.org/wiki/Fourier_optics
         | 
         | [1] - https://en.wikipedia.org/wiki/Optical_correlator
         | 
         | [2] - https://www.youtube.com/watch?v=Y9FZ4igNxNA
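          | 
          | A rough digital analogue of that optical pattern matching,
          | i.e. FFT-based cross-correlation on toy arrays (this sketches
          | the math, not the 4f optical setup itself):
          | 
          |     import numpy as np
          | 
          |     # Find a small patch inside a larger "scene" by
          |     # correlating in the Fourier domain, which is roughly
          |     # what the optical correlator does with lenses.
          |     rng = np.random.default_rng(0)
          |     scene = rng.random((64, 64))
          |     patch = scene[20:28, 30:38]    # the pattern to locate
          | 
          |     s = scene - scene.mean()
          |     p = np.zeros_like(scene)
          |     p[:8, :8] = patch - patch.mean()
          | 
          |     corr = np.fft.ifft2(np.fft.fft2(s)
          |                         * np.conj(np.fft.fft2(p))).real
          |     print(np.unravel_index(corr.argmax(), corr.shape))
          |     # expected to print (20, 30), the patch location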
        
         | Enginerrrd wrote:
         | Analog computers are pretty awesome!
         | 
         | Say you take a standard slide rule with two log scales, and
         | want to do a division problem, x/y. There's more than one way
         | to do it. I can think of at least 3. One of them won't just
         | compute x/y for your particular x, but will compute x/y for ANY
         | x.
         | 
         | Accuracy is always the issue with analog stuff, but they sure
         | are neat.
         | 
         | Another fun one to contemplate is spaghetti sort. With an
         | analog computer of sufficient resolution, you can sort n
         | elements in O(n). You represent the numbers being sorted by
          | lengths of spaghetti. Then you stand them upright on the
          | table and bring a flat object down until it hits the first,
          | i.e. longest, piece of spaghetti. You set that one aside and
          | repeat the process, selecting the largest remaining element
          | every time.
         | 
         | I've always liked the idea of hybrid systems. I envision one
         | where you feed the analog part of your problem with a DAC, then
         | get a really close answer up to the limit of your precision
         | from the analog component, then pass that back out to an ADC
         | and you have a very very close guess to feed into a digital
         | algorithm to clean up the precision a bit. I bet you could
         | absolutely fly through matrix multiplication that way. You
         | could also take the analog output and adjust the scale so it's
         | where it needs to be on the ambiguous parts, then feed it back
         | into your analog computer again to refine your results.
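          | 
          | For fun, a quick simulation of that spaghetti sort in plain
          | Python (the "flat object" becomes a max() call, so the O(n)
          | magic of the physical version is of course lost):
          | 
          |     def spaghetti_sort(values):
          |         # Repeatedly "lower a flat object" onto the rods
          |         # and pull out the tallest one it touches.
          |         rods = list(values)
          |         result = []
          |         while rods:
          |             tallest = max(rods)
          |             rods.remove(tallest)
          |             result.append(tallest)
          |         return result              # longest first
          | 
          |     print(spaghetti_sort([3.2, 1.5, 4.8, 2.0]))
          |     # -> [4.8, 3.2, 2.0, 1.5]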
        
           | TedDoesntTalk wrote:
           | > spaghetti sort
           | 
           | Isn't this how very old sorting machines with punch cards
           | worked? I'm thinking of the kinds used by the census or
           | voting machines in the late 1800s or early 1900s.
        
             | TremendousJudge wrote:
             | I think they used radix sort, which is also pretty cool
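              | 
              | A minimal LSD radix sort sketch, the same digit-at-a-time
              | idea those card sorters used (hypothetical example, not
              | the actual machine's procedure):
              | 
              |     # Sort non-negative ints one digit at a time,
              |     # least significant digit first.
              |     def radix_sort(nums, base=10):
              |         digits = len(str(max(nums)))
              |         for d in range(digits):
              |             bins = [[] for _ in range(base)]
              |             for n in nums:
              |                 bins[(n // base**d) % base].append(n)
              |             nums = [n for b in bins for n in b]
              |         return nums
              | 
              |     print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
              |     # -> [2, 24, 45, 66, 75, 90, 170, 802]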
        
           | hbarka wrote:
           | Where does a doctor's stethoscope fit in? Other examples:
           | Mechanic's stethoscope for diagnosing an engine, airplane
           | vibrations to foretell maintenance, bump oscillations to
           | grade quality of a roadway.
        
         | teshier-A wrote:
         | Surprised to see no mention of LightOn and its Optical
          | Processing Unit!
        
       | willhinsa wrote:
       | The universe is already thinking for itself! It wrote this
       | comment and built this website, after all.
        
         | tabtab wrote:
         | And trying to expel humans after seeing them in action.
        
       | mrtesthah wrote:
       | Human thought _is_ the universe thinking. Life, inclusive of
       | humanity, is contiguous with deterministic physical reality.
        
       | rbn3 wrote:
       | This instantly reminded me of the paper "pattern recognition in a
        | bucket"[0], which I saw referenced a lot when I first started
       | reading about AI in general. I only have surface-level knowledge
       | about the field, but how exactly does what's described in the
       | article differ from reservoir computing? (The article doesn't
       | mention that term, so I assume there must be a difference)
       | 
       | [0]
       | https://www.researchgate.net/publication/221531443_Pattern_R...
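        | 
        | For reference, the bucket paper is an instance of reservoir
        | computing; a minimal echo state network sketch (a digital
        | cousin of the bucket, with made-up sizes and a toy task, not
        | the paper's actual setup):
        | 
        |     import numpy as np
        | 
        |     rng = np.random.default_rng(1)
        |     n_res = 200          # reservoir size (arbitrary)
        | 
        |     # Fixed random reservoir -- the stand-in for the bucket.
        |     W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
        |     W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        |     W *= 0.9 / max(abs(np.linalg.eigvals(W)))
        | 
        |     def run_reservoir(u):
        |         x, states = np.zeros(n_res), []
        |         for u_t in u:
        |             x = np.tanh(W_in[:, 0] * u_t + W @ x)
        |             states.append(x.copy())
        |         return np.array(states)
        | 
        |     # Toy task: predict sin(t) one step ahead.
        |     t = np.linspace(0, 20 * np.pi, 2000)
        |     u, y = np.sin(t[:-1]), np.sin(t[1:])
        |     X = run_reservoir(u)
        | 
        |     # Only this linear readout is trained (ridge regression).
        |     A = X.T @ X + 1e-6 * np.eye(n_res)
        |     w_out = np.linalg.solve(A, X.T @ y)
        |     print(np.mean((X @ w_out - y) ** 2))   # small error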
        
       | inasio wrote:
       | Relevant: DARPA last year launched a program (competition) to
       | build analog solvers that can solve (some) optimization problems
       | [0]:
       | 
       | [0]: https://www.darpa.mil/news-events/2021-10-04
        
       | mensetmanusman wrote:
       | Humans are the universe thinking for itself.
        
         | bobsmooth wrote:
         | Sure, but I'd prefer a computer without self-doubt.
        
           | willis936 wrote:
           | A system without introspection would never self-improve.
        
             | TedDoesntTalk wrote:
              | Does evolution introspect? Did the universe evolve the
              | gecko with or without introspection?
        
               | willis936 wrote:
               | Evolution as a system does not improve, no. Evolution was
               | superseded by self-improving (human thought-based)
               | systems.
               | 
                | A gecko, once born, did not self-improve. The species'
                | average offspring improved externally via natural
                | selection.
        
               | guerrilla wrote:
               | > Evolution was superseded by self-improving (human
               | thought-based) systems.
               | 
                | Strictly correct, but consider that our ideas are also
                | undergoing evolution. What we learn depends on our
                | environment. We retain what's useful, discard what's
                | not, and we also pass it down through generations...
                | This is pretty much natural selection, just at a
                | different level.
        
               | mensetmanusman wrote:
               | "A gecko born did not self-improve."
               | 
               | This might not necessarily be true, for example, a
               | genetic defect that a gecko figures out how to leverage
               | through self improvement (to feed itself) might then be
               | passed on to offspring.
        
           | the_other wrote:
           | Some self doubt is critical for right thinking.
        
             | hackernewds wrote:
             | some self-doubt is right for critical thinking
        
         | guerrilla wrote:
         | Well, all animals are...
        
         | jb1991 wrote:
         | Indeed, it is said that "life is the universe's way of looking
         | back at itself."
        
         | spideymans wrote:
         | Then perhaps the best way to make the universe think for us is
         | to produce a biological computer, similar in nature to a brain.
        
         | ben_w wrote:
         | This is more like "Surprise! Turns out panpsychism was the
         | right answer all along!"
        
           | lolive wrote:
           | Ok. But then:
           | 
           | - what is the question?
           | 
           | - what is the answer?
        
             | ben_w wrote:
             | > what is the question?
             | 
             | Is the simulation hypotheses more or less plausible?
             | 
             | > what is the answer?
             | 
             | Supren supren, suben suben, maldekstra dekstra, maldekstra
             | dekstra, bee aye komenco. ;)
        
               | porkphish wrote:
               | Magic 2.0?
        
               | ben_w wrote:
                | Yes, although I could also speak Esperanto before
                | reading the book (or rather, listening; it was an
                | audiobook), so when that line happened I recognised it
                | even faster than Martin did.
        
               | danbruc wrote:
               | _Is the simulation hypotheses more or less plausible?_
               | 
               | It has always been implausible and it will most likely
               | stay that way.
        
               | mckirk wrote:
               | How so?
        
               | ClumsyPilot wrote:
                | When was this conclusion reached? I totally missed the
                | announcement.
        
               | danbruc wrote:
               | Right when it was formulated. In the best case - assuming
               | the simulation hypothesis does not have any flaws, i.e.
               | there are no hidden assumptions or logical flaws or
               | something along that line - the simulation hypothesis
               | provides a trilemma, i.e. one of three things has to be
               | true. That we are living in a simulation is only one of
               | them and arguably the most implausible one.
               | 
               | But let us just assume we continue exploring and
               | inspecting our universe and one day we discover that
               | space is quantized into small cubes [1] with a side
               | length of a thousand Planck lengths just like a voxel
               | game world. Now what? Are we living in a simulation? Is
               | this proof?
               | 
               | Actually, you probably would not be any wiser. How would
               | you know whether the universe just works with small
               | voxels and we wrongly assumed all the time that space is
               | continuous or whether this universe is a simulation using
               | voxels and somewhere out there is the real universe with
               | continuous space? You do not know what a real universe
               | looks like, you do not know what a simulated universe
               | looks like, you just know what our universe looks like.
               | How will you ever tell what kind our universe is?
               | 
               | [1] This is purely hypothetical, I do not care about how
               | physically realistic this is, what kind of problems with
               | preferred reference frames or what not this might cause,
               | let us just pretend it makes sense.
        
               | mrwnmonm wrote:
               | The simulation hypothesis is so cute.
        
               | ClumsyPilot wrote:
                | Your post is not putting forward any argument about
                | plausibility or probability; you are just saying that
                | the theory is not falsifiable / we will never find out,
                | like the argument about God.
                | 
                | The argument about probability goes something like
                | this: there is only one real universe, where an
                | advanced species like us would evolve. Eventually we
                | would create multiple simulations. If an advanced
                | species evolves in a simulation, they create their own
                | simulation.
                | 
                | Therefore there is only one real universe, but many
                | simulations, so chances are we are in a simulation. It
                | also could explain why we are alone in the universe.
                | 
                | Holographic theory suggests that the whole universe
                | could be a hologram around a 4D black hole or
                | something, so it also appears to hint in this
                | direction.
        
               | danbruc wrote:
                | _Your post is not putting forward any argument about
                | plausibility or probability [...]_
               | 
               | Maybe not with enough emphasis, but I did - the other two
               | options of the trilemma seem much more plausible.
               | 
                |  _[...] you are just saying that the theory is not
                | falsifiable / we will never find out, like the argument
                | about God._
               | 
                | This depends. If your belief includes, say, that god
                | reacts to prayers, then we can most certainly test this
                | experimentally. But overall the two may be somewhat
               | similar - unless god or the creator of the simulations
               | shows up and does some really good magic tricks, it might
               | be hard to tell one way or another.
               | 
               |  _The argument about probability goes something like
               | this: there is only one real universe, where an advanced
               | species like us would evolve._
               | 
               | You do not know that there is only one universe. You do
               | not know that we qualify as an advanced species with
               | respect to cosmological standards.
               | 
               |  _Eventually we would create multiple simulations._
               | 
               | Will we? What if we go extinct before we reach that
               | capability? What if we decided that it is unethical to
               | simulate universes? What if this is not feasible
               | resource-wise?
               | 
                |  _If an advanced species evolves in a simulation [...]_
               | 
               | Will they? Can they? I think it is a pretty fair
               | assumption that simulations in general require more
               | resources than the real system or provide limited
               | fidelity. If you want to simulate the mixing of milk in a
               | cup of coffee, you will either need a computer much
               | larger than the cup of coffee or on a smaller computer
               | the simulation will take much longer than the real
               | process or you have to use some crude fluid dynamics
               | simulation that gives you an acceptable macroscopic
               | approximation but ignores all the details like positions
               | and momenta of all the atoms. Therefore I would say that
               | any simulation can at best simulate only a small fraction
               | of the universe the simulation is running in and it is
               | not obvious that a small part would be enough to produce
               | simulated humans.
               | 
               |  _[...] they create their own simulation._
               | 
               | Everything from above applies, there are reasons why this
               | might not happen. And with every level you go down the
               | issues repeat - can and will they create simulations? And
               | the simulated universes are probably shrinking all the
               | time as well as you go deeper.
               | 
               |  _Therefore there is only one real universe, but many
               | simulations, so chances are we are in a simulation._
               | 
               | Sure, if there are many simulations and only one real
               | universe, then it might be likely that we are in a
               | simulation. Even then there are some caveats like for
               | example each simulation also has to be reasonably big and
               | contain billions of humans or they can have fewer humans
               | but then there must be more of the simulations, otherwise
               | it might still be more likely that we are not in any of
               | the simulations.
               | 
               | Anyway, this all only applies if there is such a set of
               | nested simulations, then we are probably simulated, but
                | the real question is how likely is the existence of
                | these nested simulations? Is it even possible?
               | 
               |  _It also could explain why we are alone in the
               | universe._
               | 
               | We do not know that we are alone. And even if we are
                | alone, there are more reasonable explanations than a
               | simulation. And who even says that we would be alone in a
               | simulation?
               | 
                |  _Holographic theory suggests that the whole universe
                | could be a hologram around a 4D black hole or
                | something, so it also appears to hint in this
                | direction_
               | 
                | It does not. The holographic principle just suggests
                | that for certain theories in n dimensions there is a
                | mathematically equivalent theory with only n-1
                | dimensions. The best known example is the AdS/CFT
                | correspondence, which shows that certain theories of
                | quantum gravity based on string theory have a
                | mathematically equivalent formulation as conformal
                | field theories on the boundary of the space. Whether
                | this is a mathematical curiosity or whether it has some
                | deep reason is anyone's guess.
        
               | mensetmanusman wrote:
               | "Right when it was formulated."
               | 
                | False; like many 'beyond the Big Bang' physics
                | hypotheses, these are non-falsifiable claims that can
                | still be interesting to discuss, since humans can think
                | about such abstractions.
                | 
                | (Note that Gödel et al. showed that non-falsifiable
                | does not necessarily mean false.)
        
             | zackmorris wrote:
             | The Last Question:
             | http://users.ece.cmu.edu/~gamvrosi/thelastq.html
             | 
             | The Last Answer:
             | https://www.scritub.com/limba/engleza/books/THE-LAST-
             | ANSWER-...
        
             | rgrs wrote:
             | yes
        
             | falcor84 wrote:
             | - "What do you get if you multiply six by nine?"
             | 
             | -"forty two"
        
           | hprotagonist wrote:
            | [[ Leibniz chuckling in the background ]]
        
       | subless wrote:
        | Well, Tesla did say, and I quote, "If you want to find the secrets
       | of the universe, think in terms of energy, frequency and
       | vibration."
       | 
       | AND
       | 
       | "The day science begins to study non-physical phenomena, it will
       | make more progress in one decade than in all the previous
       | centuries of its existence."
       | 
        | I think we'll make great progress if we take those words to
        | heart.
        
       | FredPret wrote:
       | I wonder if we'll end up with a hyper-intelligent shade of the
       | colour blue
        
       | discreteevent wrote:
       | When I first came across machine learning it reminded me of
       | control theory. And sure enough if you search around you get to
       | articles like this [1] saying that neural networks were very much
       | inspired by control theory. The bit of control theory that I was
       | taught way back was about analog systems. I have no idea if the
       | electronic circuit mentioned at the end is even like a classical
       | control system but it does feel a bit like something coming
       | around full circle.
       | 
       | [1] https://scriptedonachip.com/ml-control
        
       | EricBurnett wrote:
       | I've long been enamored with the idea of learning from analog
       | computers to build the next generation of digital ones. In some
        | sense, all our computers are analog, of a sort - today's
       | computer chips are effectively leveraging electron flow through a
       | carefully arranged metal/silicon substrate, with self-
       | interference via electromagnetic fields used to construct
       | transistors and build up higher order logic units. We're now
       | working on photonic computers, presumably with some new property
        | leading to self-interference, and allowing transistors/logic
       | above that.
       | 
       | "Wires" are a useful convenience in the electron world, to build
        | pathways that don't degrade with the passing of the electrons
       | themselves. But if we relax that constraint a bit, are there
       | other ways we can build up arrangements of "organized flow"
       | sufficient to have logic units arise? E.g. imagine pressure waves
        | in a fluid-filled container, with mini barriers throughout
       | defining the possible flow arrangement that allows for
       | interesting self-reflections. Or way further out, could we use
       | gravitational waves through some dense substance with carefully
       | arranged holes, self-interfering via their effect on space-time,
       | to do computations for us? And maybe before we get there, is
       | there a way we could capitalize on the strong or weak nuclear
       | force to "arrange" higher frequency logical computations to
       | happen?
       | 
       | Physics permits all sorts of interactions, and we only really use
       | the simple/easy-to-conceptualize ones as yet, which I hope and
       | believe leaves lots more for us to grow into yet :).
        
         | 323 wrote:
         | > _It employs two-dimensional quasiparticles called anyons,
         | whose world lines pass around one another to form braids in a
         | three-dimensional spacetime (i.e., one temporal plus two
         | spatial dimensions). These braids form the logic gates that
         | make up the computer. The advantage of a quantum computer based
         | on quantum braids over using trapped quantum particles is that
         | the former is much more stable._
         | 
         | https://en.wikipedia.org/wiki/Topological_quantum_computer
        
         | sandworm101 wrote:
         | Electricity is also a wave. The wires are essentially
         | waveguides for particles/waves traveling at near luminal
         | speeds. So in theory anything done with electricity could be
         | replicated using other waves, but to make it faster you would
         | need waves that travel faster than electrons through a wire.
         | Photons through a vacuum might be marginally faster, but
          | pressure waves through a fluid would not.
         | 
         | If bitflips are a problem in a modern chip, imagine the number
         | of problems if your computer ran on gravity waves. The
         | background hum of billions of star collisions cannot be blocked
         | out with grounded tinfoil. There is no concept of a faraday
         | cage for gravity waves.
        
           | lupire wrote:
            | Gravity is a poor source of computation because it is
            | incredibly weak - roughly 10^-43 of the electromagnetic
            | force between electrons. Even if you add
           | several powers of 10 for all the metal wire harness and
           | battery chemistry around the electrons, you still get far
           | more usable force per gram from electricity and metal than
           | you do from gravity.
        
             | otikik wrote:
             | Think Big.
             | 
             | A computer that's also a Galaxy.
        
               | alephxyz wrote:
                | With latency measurable in millennia
        
               | cjsawyer wrote:
               | Have we checked to see if this is already the case?
        
           | stochtastic wrote:
           | Nitpick: gravity waves [1] pretty universally refer to waves
           | in fluid media in which the restoring force is buoyancy.
           | Ripples in spacetime are usually called _gravitational_
           | waves.
           | 
           | [1] https://en.wikipedia.org/wiki/Gravity_wave
           | 
           | [2] https://en.wikipedia.org/wiki/Gravitational_wave
        
           | altruios wrote:
           | A faraday cage for gravity waves would be awesome... I mean -
           | computers are nice - but you hit the nail on the head for
           | revolutionary tech.
        
           | markisus wrote:
           | Is it even theoretically possible to waveguide gravity? The
           | electric field can be positive and negative, but gravity is
           | unsigned -- there is no anti-gravity. This is probably
           | related to what you're saying about faraday cages.
        
             | whatshisface wrote:
             | Gravitational waves can either stretch or contract
             | spacetime relative to a baseline. Since the Einstein field
             | equations are nonlinear, I think gravitational waves can be
             | "refracted" when traveling through a region with a high
             | baseline curvature, so maybe waveguides are possible.
             | Gravitational lenses do lens gravitational waves in
             | addition to light.
        
             | Optimal_Persona wrote:
             | It's not unsigned, if you look on the back it says "Come
             | together, you all. Love, The Universe." ;-)
        
               | robotresearcher wrote:
               | Gravity is antigravity if you run time backwards.
        
         | huachimingo wrote:
          | It's like procedural generation: hide the data in a
          | formula/algorithm, so it takes up less space.
          | 
          | Replace "data" with "computation", and "formula" with
          | physical, less expensive processes.
        
       | toss1 wrote:
       | The subheader: >>Physicists are building neural networks out of
       | vibrations, voltages and lasers, arguing that the future of
       | computing lies in exploiting the universe's complex physical
       | behaviors.
       | 
        | I.e., analog can do insane levels of computing (it's had 13+
        | billion years to evolve), but digital computing is easier to
        | think about. So, like the hapless drunkard looking for his lost
        | key under the streetlight because it's easier to see there
        | (instead of where he most likely dropped it), we pursue digital
        | because it's easier to reason about. TBF, digital does yield
        | bigger results much more quickly and flexibly, but some really
        | interesting problems will likely require further exploration of
        | the analog computing space.
        
       | [deleted]
        
       | momenti wrote:
        | I wonder what kind of speedup we can expect from neuromorphic
        | computing within the next 5-10 years.
        
       | shadowgovt wrote:
       | This is why real computer science is pencil-and-paper work, not a
       | sub-field of electronics.
       | 
       | Electronics is great because we can create some specific fast,
       | reproducible physical phenomena with it (logic gates, symbol
       | storage and retrieval). But any physical principle that can
       | create fast, reproducible phenomena would be just as valuable for
       | computing. _Diamond Age_ posits smart-books that operate on
       | atomic-scale  "rod logic" mechanical phenomena. Cells do
       | something that looks an awful lot like computation with protein
       | chemistry.
        
       | lisper wrote:
       | A better title would have been: how to make the universe do (a
       | whole lot of) math for us [1]. What so-called neural networks do
       | should not be confused with thinking, at least not yet.
       | 
       | And the fact that we can get the universe to do math for us
       | should not be surprising: we can model the universe with math, so
       | of course that mapping works in the other direction as well. And
       | this is not news. There were analog computers long before there
       | were digital ones.
       | 
       | ---
       | 
       | [1] ... using surprisingly small amounts of hardware relative to
       | what a digital computer would require for certain kinds of
       | computations that turn out to be kind of interesting and useful
       | in specific domains. But that's not nearly as catchy as the
       | original.
        
         | hans1729 wrote:
         | assertion: thinking is synonymous with computation (composed
         | operations on symbolic systems).
         | 
         | computation is boolean algebra.
         | 
          | -> therefore, doing math _is_ thinking.
         | 
         | I'm not trying to be pedantic, I just don't think using
          | intuitive associations with words helps clarify things. If
         | your definition for thought diverges here, please try to
         | specify how exactly: what is thought, then? Semi-autonomous
         | "pondering"? Because the closer I look at it, that, too,
         | becomes boolean algebra, calling eval() on some semantic
         | construct, which boils down to symbolic logic.
         | 
         | What you may mean is that "neural" networks are performing
         | statistics instead of algebra, but that's not what the article
         | is about, is it?
        
           | mannykannot wrote:
           | > If your definition for thought diverges here, please try to
           | specify how exactly: what is thought, then?
           | 
           | This is a burden-shifting reply of "so prove me wrong!" to
           | anyone who feels that your assertion lacks sufficient
           | justification for it to be taken as an axiom.
        
             | DANK_YACHT wrote:
             | The original commenter also made a random assertion: "doing
             | math is not thinking." The person you're responding to
             | attempted to provide a definition of "thinking."
        
               | mannykannot wrote:
               | The original commenter's comment does not contain this
               | claim. I suppose it could have been edited, though by the
               | time I saw it, I believe the window for editing had
               | closed.
               | 
               | Neither what lisper actually says nor what hans1729
               | replied with are random assertions, and, furthermore,
               | they are each entitled to assert whatever axioms they
               | like - but anyone wanting others to accept their axioms
               | should be prepared to assume the burden of presenting
               | reasons for others to do so.
        
           | meroes wrote:
            | Are a ruler and compass computation? They don't operate
            | symbolically, and yet they are computers.
        
           | sweetdreamerit wrote:
            | > I don't think using intuitive associations with words
            | helps clarify things
            | 
            | Sincere question: do you _think_ that "think using
            | intuitive associations with words" can be safely
            | translated to "compute using intuitive associations with
            | words"? I don't _think_ so. Therefore, even if thinking is
            | also computing, reducing thinking to boolean algebra is a
            | form of reductionism that ignores a number of _emergent_
            | properties of (human) thinking.
        
             | hans1729 wrote:
             | Fair question/point. Yes, I do think so.
             | 
             | The intuitive model associated with some variable/word as a
             | concept relates to other structures/models/systems that it
             | interfaces with. Just because the operator that accesses
             | these models with rather vague keys (words) has no clear
             | picture of what exactly is being computed on the surface,
             | doesn't mean that the totality of the process is not
             | computation. It just means that the emergent properties are
             | not mapped into the semantic space which the operator (our
             | attention mechanisms) operates on. From my understanding,
             | the totality I just referred to is a graph-space, it
             | doesn't escape mathematics. Then again, I can't _know_ or
             | claim to do so.
        
         | troyvit wrote:
         | We can model the universe with math because math is what we
         | have to model the universe with. The fact that it can talk back
         | to us in math is amazing because to me it means that math is
         | not a dead end cosmically, which means we might be able to use
         | it to communicate with other intelligences after all.
        
         | misja111 wrote:
         | > we can model the universe with math, so of course that
         | mapping works in the other direction as well.
         | 
         | This is not so obvious as you make it appear. For instance, we
         | can model the weather for the next couple of days using math.
         | But letting the weather of the next couple of days calculate
         | math for us doesn't work very well. The reason is that we can't
         | set the inputs for the weather.
         | 
         | This problem comes up in various forms and shapes in other
         | 'nature computers' as well. Quantum computers are another
         | example where the model works brilliantly but setting the pre-
         | and side conditions in the real world is a major headache.
        
           | goldenkey wrote:
            | You can use the weather or a bucket of water or, well, any
            | sufficiently complex chaotic system, as a reservoir
            | computer though:
           | 
           | https://en.wikipedia.org/wiki/Reservoir_computing
        
             | yetihehe wrote:
              | Reservoir computing still needs you to provide input. How
              | do you input into weather? And if you find that you
              | indeed can
             | provide inputs to weather, you should ask if you _should_
             | provide inputs into weather. Using weather as a computer
             | might simply be unethical and could get you killed (after
             | some angry farmers come knocking on your lab door because
             | you ruined their harvest).
        
           | lisper wrote:
           | I didn't mean to imply that implementing it should be easy.
           | Only that it should be unsurprising that it is possible.
        
         | zmgsabst wrote:
         | > What so-called neural networks do should not be confused with
         | thinking, at least not yet.
         | 
         | I disagree:
         | 
         | I think neural networks are learning an internal language in
         | which they reason about decisions, based on the data they've
         | seen.
         | 
         | I think tensor DAGs correspond to an implicit model for some
         | language, and we just lack the tools to extract that. We can
         | translate reasoning in a type theory into a tensor DAG, so I'm
         | not sure why people object to that mapping working the other
         | direction as well.
        
           | V__ wrote:
           | This internal language, if I'm not mistaken, is exactly what
           | the encoder and decoder parts of the neural networks do.
           | 
           | > in which they reason about decisions
           | 
           | I'm in awe of what the latest neural networks can produce,
           | but I'm wary to call it "reasoning" or "deciding". NNs are
           | just very complex math equations and calling this
           | intelligence is, in my opinion, muddying the waters of how
           | far away we are from actual AI.
        
             | andyjohnson0 wrote:
             | > I'm in awe of what the latest neural networks can
             | produce, but I'm wary to call it "reasoning" or "deciding".
             | 
             | I think humans find it quite difficult to talk about the
             | behaviour of complex entities without using language that
             | projects human-like agency onto those entities. I suspect
              | it's the way that our brains work.
        
               | Banana699 wrote:
               | Indeed, I'm an atheist who absolutely loves biology. I
               | adore all the millions upon millions of tiny and huge
                | complex machines that Evolution just spits out left and
                | right merely by being a very old, very brutal, and very
                | stupid simulation running for 4 billion years straight
                | on a massive, inefficiently-powered distributed
                | processor.
               | 
               | And I can never shake the unconscious feeling that all
               | this is _purposeful_ , the idea that all this came by
               | literally throwing shit at a wall and only allowing what
               | sticks to reproduce warps my mind into unnatural
               | contortions. The sheer amount of _order_ that life is,
               | the sheer regularity and unity of purpose it represents
                | amidst the soup of dead matter that is the universe.
                | It's... unsettling?
               | 
                | Which is why I personally think the typical "Science^TM"
                | way of arguing against traditional religions is
                | misguided. Typical religions already make the task of
                | refuting them a thousand times easier by assuming a
                | benevolent creator,
               | which a universe like ours, with a big fat Problem Of
               | Evil slapped on its forehead, automatically refutes for
               | you.
               | 
                | But the deeper question is whether there is/are
                | Creator(s) at all: ordered, possibly-intelligent (but
                | most definitely not moral by _any_ human standards)
                | entities, which spewed out this universe in some manner
                | that can be approximated as purposeful (or even,
                | perhaps, as a by-product of doing a completely
                | unrelated activity - like they created our universe by
                | accident while, or as a result of, doing another
                | activity useful to them, the way accidental pregnancies
                | happen to us humans). This is a far more muddled and
                | interesting question, and "Science" emits much more
                | mixed signals than straight answers.
        
             | zmgsabst wrote:
             | > NNs are just very complex math equations
             | 
             | So is the equation modeling your brain.
             | 
             | > This internal language, if I'm not mistaken, is exactly
             | what the encoder and decoder parts of the neural networks
             | do.
             | 
             | The entire ANN is also a model for a language, with the
             | "higher" parts defining what terms are legal and the
             | "lower" defining how terms are constructed. Roughly.
             | 
             | > I'm in awe of what the latest neural networks can
             | produce, but I'm wary to call it "reasoning" or "deciding".
             | 
             | What do you believe you do, besides develop an internal
             | language in response to data in which you then make
             | decisions?
             | 
             | The process of ANN evaluation is the same as fitting terms
             | in a type theory and producing an outcome based on that
             | term. We call that "reasoning" in most cases.
             | 
             | I don't care if submarines "swim"; I care they propel
             | themselves through the water.
             | 
             | > calling this intelligence is, in my opinion, muddying the
             | waters of how far away we are from actual AI
             | 
             | Goldfish show mild intelligence because they can learn
             | mazes; ants farm; bees communicate the location of food via
             | dance; etc.
             | 
             | I think you're the one muddying the waters by placing some
              | special status on human-like intelligence without
             | recognizing the spectrum of natural intelligence and that
             | neural networks legitimately fit on that spectrum.
        
               | tdehnel wrote:
               | But the spectrum is an illusion. It's not like humans are
               | just chimpanzees (or ants or cats) with more compute.
               | 
               | Put differently, if you took an ant or cat or chimpanzee
               | and made it compute more data infinitely faster, you
               | wouldn't get AGI.
               | 
               | Humans can do something fundamentally unique. They are
               | _universal explainers_. They can take on board any
               | explanation and use it for creative thought instantly.
               | They do not need to be trained in the sense that neural
               | nets do.
               | 
               | Creating new ideas, making and using explanations, and
               | critiquing our own thoughts is what makes humans special.
               | 
               | You can't explain something to a goldfish and have it
               | change its behavior. A goldfish isn't thinking "what if I
               | go right after the third left in the maze".
               | 
               | Credit to David Deutsch for these ideas.
        
               | zmgsabst wrote:
               | > Put differently, if you took an ant or cat or
               | chimpanzee and made it compute more data infinitely
               | faster, you wouldn't get AGI.
               | 
               | Citation needed.
               | 
               | > They can take on board any explanation and use it for
               | creative thought instantly. They do not need to be
               | trained in the sense that neural nets do.
               | 
               | This is patently false: I can explain a math topic people
               | can't immediately apply -- and require substantial
               | training (ie, repeated exposure to examples of that data)
               | to get it correct... if they ever learn it at all. Anyone
               | with a background in tutoring has experienced this claim
               | being false.
               | 
               | > Creating new ideas, making and using explanations, and
               | critiquing our own thoughts is what makes humans special.
               | 
               | Current AI has approaches for all of these.
        
               | tdehnel wrote:
               | >Citation needed.
               | 
               | That's a lazy critique. With a lack of concrete evidence
               | either way, we can only rely on the best explanation
               | (theory). What's your explanation for how an AGI is just
               | an ant with more compute? I've given my explanation for
               | why it's not: an AGI would need to have the ability to
               | create new explanatory knowledge (i.e. not just
               | synthesize something that it's been trained to do).
               | 
               | As an example, you can currently tell almost any person
               | (but certainly no other animal or current AI) "break into
               | this room without destroying anything and steal the most
               | valuable object in it". Go ahead and try that with a
               | faster ant.
               | 
               | On your tutoring example, just because a given person
               | doesn't use their special capabilities doesn't mean they
               | don't have them. Your example could just as easily be
               | interpreted to mean that tutors just haven't figured out
               | how to tutor effectively. As a counter example, would you
               | say your phone doesn't have the ability to run an app
               | which is not installed on it?
               | 
               | >Current AI has approaches for all these.
               | 
               | But has it solved them? Or is there an explanation as to
               | why it hasn't solved them yet? What new knowledge has AI
               | created?
               | 
                | I know as a member of an ML research group you _really
               | want_ current approaches to be the solution to AGI. We
               | are making progress I admit. But until we can explain how
               | general intelligence works, we will not be able to
               | program it.
        
               | [deleted]
        
               | TaupeRanger wrote:
               | Variations of this exact argument happen in every single
               | comment thread relating to AI. It's almost comical.
               | 
               | "The NN [decides/thinks/understands]..."
               | 
               | "NNs are just programs doing statistical computations,
               | they don't [decide/think/understand/"
               | 
               | "Your brain is doing the same thing."
               | 
               | "Human thought is not the same as a Python program doing
               | linear algebra on a static set of numbers."
               | 
               | And really, I can't agree or disagree with either premise
               | because I have two very strong but very conflicting
               | intuitions: 1) human thought and consciousness is
               | qualitatively different from a Python program doing
               | statistics. 2) the current picture of physics leaves no
               | room for such a qualitative difference to exist - the
               | character of the thoughts (qualia) must be illusory or
               | epiphenomenal in some sense
        
               | zmgsabst wrote:
               | I don't think those are in conflict: scale has a quality
               | all its own.
               | 
               | I'm not claiming AI have anything similar to human
               | psychology, just that the insistence they have _zero_
               | "intelligence" is in conflict with how we use that word
               | to describe animals: they're clearly somewhere between
               | bees /ants and dogs.
        
               | TaupeRanger wrote:
               | The conflict is that, at one point (the Python program)
               | there are no qualities - just behaviors, but at some
               | point the qualities (which are distinct phenomena)
               | somehow enter in, when all that has been added in
               | physical terms is more matter and energy.
        
               | V__ wrote:
               | > by placing some special status on human like
               | intelligence without recognizing the spectrum of natural
               | intelligence
               | 
               | You're right, yes, if you see it as the whole spectrum,
               | sure. I was more thinking about the colloquial meaning of
               | an AI of human-like intelligence. My view was therefore
               | from a different perspective:
               | 
               | > So is the equation modeling your brain.
               | 
               | I would argue that is still open to debate. Sure, if the
               | universe is deterministic, then everything is just one
               | big math problem. If there is some natural underlying
               | randomness (quanta phenomena etc.) then maybe there is
               | more than deterministic math to it.
               | 
               | > We call that "reasoning" in most cases.
               | 
                | Is a complex if-else structure reasoning? Reasoning, to
                | me, implies some sort of consciousness, and being able to
               | "think". If a neural network doesn't know the answer,
               | more thinking won't result in one. A human can (in some
               | cases) reason about inputs and figure out an answer after
               | some time, even if they didn't know it in the beginning.
        
               | zmgsabst wrote:
               | > I was more thinking about the colloquial meaning of an
               | AI of human-like intelligence.
               | 
               | Then it sounds like we're violently agreeing -- I
               | appreciate you clarifying.
               | 
               | I try to avoid that mindset, because it's possible that
               | AI will become intelligent in a way unlike our own
               | psychology, which is deeply rooted in our evolutionary
               | history.
               | 
               | My own view is that AI aren't human-like, but are
               | "intelligent" somewhere between insects and dogs. (At
               | present.)
               | 
               | > If a neural network doesn't know the answer, more
               | thinking won't result in one.
               | 
               | I think reinforcement learning contradicts that, but
               | current AIs don't use that ability dynamically. But GAN
                | cycles and adversarial training for, e.g., Go suggest that
               | AIs given time to contemplate a problem can self-improve.
               | (That is, we haven't implemented it... but there's also
               | no fundamental roadblock.)
        
       | pseudolus wrote:
       | I believe one of the earliest applications incorporating this
       | line of thought was MONIAC, the Monetary National Income Analogue
       | Computer, which used water levels to model the economy [0].
       | There's a short youtube documentary on its history and operation.
       | [1]
       | 
       | [0] https://en.wikipedia.org/wiki/MONIAC
       | 
       | [1]
       | https://www.youtube.com/watch?v=rAZavOcEnLg&ab_channel=Reser...
       | 
       | https://youtu.be/rAZavOcEnLg?t=101 (shows operation of MONIAC)
        
         | wardedVibe wrote:
          | Analog computers date back to the 19th century; they were
          | used to decompose signals using the Fourier transform, since
          | it's easy(ish) to build a bunch of oscillators at different
          | frequencies. They were used for tide prediction and
          | differential equations.
         | https://en.m.wikipedia.org/wiki/Analog_computer
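          | 
          | The tide-predictor idea fits in a few lines; each term below
          | plays the role of one pulley-and-gear oscillator in Kelvin's
          | machine (amplitudes and phases here are made-up placeholders,
          | not real tidal constants):
          | 
          |     import numpy as np
          | 
          |     # (amplitude_m, period_hours, phase) per constituent
          |     constituents = [(1.0, 12.42, 0.0),
          |                     (0.5, 12.00, 1.3),
          |                     (0.2, 25.82, 2.1)]
          | 
          |     def tide_height(t):
          |         # Sum of harmonics -- the machine's summed pulleys
          |         return sum(a * np.cos(2 * np.pi * t / p + ph)
          |                    for a, p, ph in constituents)
          | 
          |     t = np.linspace(0, 48, 481)    # two days, 6-min steps
          |     h = tide_height(t)
          |     print(h.max(), h.min())        # predicted extremes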
        
       ___________________________________________________________________
       (page generated 2022-06-01 23:01 UTC)