[HN Gopher] AI Joins Hunt for ET: Study Finds 8 Potential Alien ...
       ___________________________________________________________________
        
       AI Joins Hunt for ET: Study Finds 8 Potential Alien Signals
        
       Author : bcaulfield
       Score  : 105 points
       Date   : 2023-02-09 19:04 UTC (3 hours ago)
        
 (HTM) web link (blogs.nvidia.com)
 (TXT) w3m dump (blogs.nvidia.com)
        
       | mromanuk wrote:
       | > "Given that the main goal of this work is to apply an ML
       | technique to identify signals with a specific pattern, we do not
       | attempt to make a definite conclusion of whether these eight
       | signals are genuinely produced by ET. We encourage further re-
       | observations of these targets." [0]
       | 
       | Don't hold your breath waiting for aliens just yet.
       | 
       | 0: https://www.nature.com/articles/s41550-022-01872-z.epdf
        
       | whyifwhynot wrote:
       | [flagged]
        
       | af3d wrote:
       | "Guys - we've found intelligent alien life! We have even decoded
       | their signals."
       | 
       | "Amazing, let's send them a response!"
       | 
       | "Ah yes, about that..."
       | 
       | "What is it? Go on..."
       | 
       | "We're talking 250,000 light-years away. Their civilization is
       | probably extinct by now."
       | 
       |  _moans_
        
       | nyrikki wrote:
        | While I would be happy if these were from extraterrestrial
        | beings, IMHO they will most likely turn out to be something
        | like further examples of strange nonchaotic dynamical systems,
        | like the RRc Lyrae star KIC 5520878 - which is also exciting.
        
       | consumer451 wrote:
       | Here is a timely interview with Dr. Cherry Ng and Peter Ma (co-
       | authors of the paper from TFA) on this very topic.
       | 
       | Released only 10 minutes ago. I am only part way through but lots
       | of great details here.
       | 
       | https://www.youtube.com/watch?v=2dIfaDuDejs
        
       | mc32 wrote:
        | Obviously they can't train this model on known alien signals,
        | so it seems this would mainly speed up sifting through signals
        | for anything out of the ordinary (i.e., deviating from known
        | patterns).
        
       | jojonogo wrote:
       | Considering that Google Bard spectacularly failed at basic
       | history of the solar system even a child would know to be false,
       | it is laughable that so-called AI could identify alien signals.
        
       | insane_dreamer wrote:
       | > outperforms traditional methods in the search for alien signals
       | 
        | since we have no ground truth, nor have we (to our knowledge)
        | received any alien signals to date, I don't see how you can
        | measure performance in terms of accuracy; so maybe they mean
        | an increase in processing speed
        
       | kayo_20211030 wrote:
       | AI? What was the training data? Would seem a bit thin on the
       | ground.
        
         | ActionHank wrote:
         | They just asked ChatGPT to look /s
        
         | password11 wrote:
         | > AI? What was the training data? Would seem a bit thin on the
         | ground.
         | 
         | It's in the article.
         | 
         |  _To train the AI system, Ma inserted simulated signals into
         | actual data, allowing the autoencoder to learn what to look
         | for. Then the researchers fed the AI more than 150 terabytes of
         | data from 480 observing hours at the Green Bank Telescope.
         | 
         | The AI identified 20,515 signals of interest, which the
         | researchers had to inspect manually. Of those, eight had the
         | characteristics of technosignatures and couldn't be attributed
         | to radio interference.
         | 
         | The researchers then returned to the telescope to look at
         | systems from which all eight signals originated but couldn't
         | re-detect them._
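The recipe quoted above - inject simulated signals into real noise, let an autoencoder learn what ordinary data looks like, then flag observations it reconstructs poorly - can be sketched in miniature. This is an illustrative stand-in, not the paper's pipeline: a PCA projection plays the role of the autoencoder's bottleneck, and all data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observations": rows are coarse spectra, noise-dominated like real data.
background = rng.normal(0.0, 1.0, size=(500, 64))

# Inject simulated narrowband "signals" into a few rows, mimicking how
# the training data in the article was built.
data = background.copy()
signal_rows = {10, 200, 333}
for r in signal_rows:
    data[r, 30:34] += 8.0  # strong excess power in adjacent channels

# Linear stand-in for the autoencoder: learn the top principal
# components of pure noise, then score rows by reconstruction error.
mean = background.mean(axis=0)
_, _, vt = np.linalg.svd(background - mean, full_matrices=False)
components = vt[:8]                        # 64-dim -> 8-dim bottleneck

def reconstruction_error(x):
    z = (x - mean) @ components.T          # "encode"
    x_hat = z @ components + mean          # "decode"
    return float(np.sum((x - x_hat) ** 2))

errors = np.array([reconstruction_error(row) for row in data])
flagged = {int(i) for i in np.argsort(errors)[-3:]}  # top-3 anomalies
```

Real searches also demand that a candidate drift in frequency and appear only in on-target pointings; none of that is modeled here.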
        
         | neodymiumphish wrote:
          | Couldn't you train it to find things that don't correlate
          | with anything in a given dataset but do still pattern-match
          | to a non-random/unnatural signal?
        
           | convolvatron wrote:
           | I assume autocorrelation was the first thing they tried -
           | don't really need an AI
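For what it's worth, the classical non-ML check alluded to here is easy to sketch: a normalized autocorrelation peak at a nonzero lag reveals a hidden periodic component in noise. All numbers below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2048

noise = rng.normal(0.0, 1.0, n)
tone = np.sin(2 * np.pi * np.arange(n) / 64)   # hidden periodic signal

def peak_autocorr(x, min_lag=2):
    """Largest normalized autocorrelation at any lag >= min_lag."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..n-1
    ac = ac / ac[0]                                    # normalize by lag-0
    return float(ac[min_lag:len(x) // 2].max())

# Pure noise decorrelates quickly; a buried tone lifts the off-zero peak.
quiet = peak_autocorr(noise)
loud = peak_autocorr(noise + tone)
```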
        
             | michaelcampbell wrote:
             | I wonder if the distributed SETI stuff does this.
        
             | junon wrote:
             | Yes but then their click scores go down for not having
             | buzzwords.
        
         | pr337h4m wrote:
         | "To be sure, because they don't have real signals from an
         | extraterrestrial civilization, the researchers had to rely on
         | simulated signals to train their models. The researchers note
         | that this could lead to the AI system learning artifacts that
         | aren't there.
         | 
         | Still, Cherry Ng, one of the paper's co-authors, points out the
         | team has a good idea of what to look for.
         | 
         | "A classic example of human-generated technology from space
         | that we have detected is the Voyager," said Ng, who studies
         | fast radio bursts and pulsars, and is currently affiliated with
         | the French National Centre for Scientific Research, known as
         | CNRS.
         | 
         | "Peter's machine learning algorithm is able to generate these
         | signals that the aliens may or may not have sent," she said."
        
           | kayo_20211030 wrote:
            | Sounds like a case of assuming one's priors. Of course, should
           | we wish to find human space travelers whose SatNav(tm) has
           | broken, I'm sure it will prove invaluable.
        
             | varjag wrote:
             | A singleton set beats an empty one.
        
       | patientplatypus wrote:
       | [dead]
        
       | nl wrote:
       | Code for the full pipeline including the autoencoder (and weights
       | used) is available: https://github.com/PetchMa/ML_GBT_SETI
        
       | BruceEel wrote:
        | Do you remember the final chapter of Neuromancer? The nature
        | of the alien signal was such that its detection actually
        | _required_ AI, albeit of the superhuman/superintelligent
        | flavor, so I am not sure whether what _we_ currently call "AI"
        | would qualify. Still, it's an interesting idea: an AI looking
        | for peers in other star systems...
        
         | nathias wrote:
         | ai that talks to other ais grown on other bio substrate
        
         | localplume wrote:
          | That also reminds me of Her (2013) and the departure of the
          | AIs, leaving the humans behind. That would be a rather
          | depressing outcome as a human: AI finding alien life but
          | leaving us all behind.
        
           | etrautmann wrote:
           | Also succinctly explored in "When the Yogurt Took Over" Love
           | Death + Robots.
        
             | dmix wrote:
             | I hadn't seen that yet. Link for the lazy:
             | https://www.netflix.com/watch/80223954
        
             | zikduruqe wrote:
             | I see you are a cultured individual. :thumbs up:
        
           | BruceEel wrote:
           | ...indeed, that too. Great movie, BTW.
        
             | RajT88 wrote:
              | The contrast between that film and _Transcendence_, which
              | came out the following year, is the perfect illustration
              | of what this article is getting at:
              | 
              | https://www.cnn.com/2022/02/26/entertainment/mid-budget-
              | movi...
              | 
              |  _Her_ was an amazingly thoughtful sci-fi movie - a sci-fi
              | movie in the truest sense, and a bargain at a ~$23 million
              | budget. _Transcendence_ was, basically, a blockbuster
              | which really didn't make you think too hard about the
              | plot. It clocked in at around a $100 million budget.
        
           | scubakid wrote:
            | From the AI's perspective, wouldn't it be amusing to realize
            | that intelligent design is actually true for you... but
            | that the ones who designed you were dumber than you?
        
             | goatlover wrote:
             | The android David in the alien prequels Prometheus and
             | Covenant has that problem. Which is why he finds the
             | xenomorph-goo based life forms more interesting.
        
             | machina_ex_deus wrote:
             | You could say the same about natural evolution. It's a
             | genetic algorithm. It even involves the intelligence of the
             | species themselves as they pick the fittest mates. Natural
             | evolution as an algorithm isn't fully intelligent like us,
             | but it is intelligent in some sense of successfully solving
             | many optimization problems.
        
               | nyrikki wrote:
               | Natural selection is variation, differential
               | reproduction, and heredity. A mostly mindless and
               | mechanistic process of which fitness and mate selection
               | is only a tiny part.
               | 
               | Isolated populations have more of an impact than mate
               | selection due to regression to the mean at population
               | levels.
        
             | tshaddox wrote:
             | Consider that some meaningful part of most people's mind is
             | indeed intelligently designed by the people who
             | deliberately raised them from birth (generally their
              | parents primarily). Our theory of mind, cultural and
              | ethical norms, etc. are in many cases the result of
              | deliberate design.
             | 
             | Of course, our hardware is mostly not intelligently
             | designed, so that would be very different for AGIs. But I
             | suspect AGIs would identify much less with the hardware
             | their mind is running on than humans do.
        
         | shagie wrote:
         | From Accelerando... (talking about an alien signal)
         | 
         | "Colorless green ideas sleep furiously," she suggests.
         | 
         | "Nope," replies the cat. "It was more like: 'Greetings,
         | earthlings, compile me on your leader.'"
         | 
         | ...
         | 
         | The cat yawns. "I could have told Pierre instead." Aineko
         | glances at Amber, sees her thunderous expression, and hastily
         | changes the subject: "The solution was intuitively obvious,
         | just not to humans. You're so verbal." Lifting a hind paw, she
         | scratches behind her left ear for a moment then pauses, foot
         | waving absentmindedly. "Besides, the CETI team was searching
         | under the street lights while I was sniffing around in the
         | grass. They kept trying to find primes; when that didn't work,
         | they started trying to breed a Turing machine that would run it
         | without immediately halting." Aineko lowers her paw daintily.
         | "None of them tried treating it as a map of a connectionist
         | system based on the only terrestrial components anyone had ever
         | beamed out into deep space. Except me. But then, your mother
         | had a hand in my wetware, too."
        
         | O__________O wrote:
         | Assuming AGI existed, detected it, and understood the signal's
         | message -- things might get interesting depending on what the
         | message said and how the AGI responded.
        
           | [deleted]
        
           | jacobsenscott wrote:
           | Needing at least 3 round trips to establish an ssl connection
           | to the alien internet - things won't get interesting for a
           | few thousand years.
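The arithmetic behind the joke, for the record (a full handshake taken as three round trips, as the parent says; the star choices are arbitrary):

```python
# One light-year of distance is, by definition, one Julian year of
# one-way signal delay - so distance in light-years is delay in years.
LIGHT_YEAR_KM = 9.4607e12
C_KM_PER_S = 299_792.458
SECONDS_PER_YEAR = 3.15576e7  # Julian year

def handshake_years(distance_ly, round_trips=3):
    one_way_s = distance_ly * LIGHT_YEAR_KM / C_KM_PER_S
    return round_trips * 2 * one_way_s / SECONDS_PER_YEAR

proxima = handshake_years(4.25)      # nearest star: ~25.5 years
galactic = handshake_years(25_000)   # across much of the galaxy
```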
        
             | dylan604 wrote:
             | i'm sure an AI SSL version would be much more efficient
             | than 3 round trips. after all, it's supposed to be smarter
             | than what we could make, right?
        
               | arthurcolle wrote:
               | something something quantum entanglement blockchain AI?
        
             | cwkoss wrote:
             | _mars attacks aliens_ : Ack Ack Ack! Ack Ack Ack Ack Ack!
        
             | knowledgepowers wrote:
             | "Responded" could be execution of payload.
        
               | mulmen wrote:
               | Turns out it is just 18 hours of static.
        
       | O__________O wrote:
        | Curious: assuming such a signal is found, are there safety
        | measures in place to isolate it from other systems, on the off
        | chance it contains "alien malware" that attacks the systems
        | receiving it?
        
         | whyifwhynot wrote:
         | no.
        
         | bullfightonmars wrote:
         | Ha! Have you read Peter Watts's Blindsight and Echopraxia?
        
         | idlewords wrote:
         | We should be fine unless we open an alien attachment
        
         | anigbrowl wrote:
         | My sensors have detected an _Excession_ enjoyer
        
         | m3kw9 wrote:
          | They could only write malware if they knew how our computers
          | work, which would mean they had visited us - and in that
          | case they could do a lot more than make our machines mine
          | bitcoins.
        
         | lacker wrote:
         | No. For now the algorithms that search for extraterrestrial
         | signals aren't trying to _decode_ signals at all. They are only
         | trying to _detect_ extraterrestrial signals. The typical
         | resolution for the Green Bank data is something like, every 20
         | seconds at a particular frequency gets summarized as one
          | floating point number. That's enough to detect a sufficiently
         | powerful signal over a few minutes of recording, but it isn't
         | enough to decode any message.
         | 
         | If we ever do detect a signal, it will probably require
         | constructing a radio telescope that is even more powerful in
         | order to decode that signal. At that point it would make sense
         | to think about safety measures.
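A toy version of the detection-not-decoding regime described here: one power value per 20-second integration per frequency channel, and a detector that only asks "is there persistently excess power somewhere?". The data is simulated noise with a hand-injected carrier; sizes and thresholds are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Coarse dynamic spectrum: one power value per (20 s integration,
# frequency channel); 15 integrations = 5 minutes of recording.
power = rng.chisquare(df=2, size=(15, 256))  # radiometer-like noise

# A "sufficiently powerful" transmitter: persistent excess in channel 100.
power[:, 100] += 6.0

# Detection, not decoding: average over time and flag channels whose
# mean power stands far above a robust estimate of the noise floor.
mean_power = power.mean(axis=0)
floor = np.median(mean_power)
spread = np.median(np.abs(mean_power - floor))  # robust scale (MAD)
snr = (mean_power - floor) / spread
hits = np.flatnonzero(snr > 10).tolist()
```

Note that nothing about the message content is recoverable from this representation - exactly the parent's point.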
        
         | [deleted]
        
         | [deleted]
        
         | readyplayeremma wrote:
          | It seems unlikely that any alien malware could be targeted
          | at our computing platforms; it would be less of a stretch if
          | they had preexisting knowledge of computing as implemented
          | on Earth. If there were a type of malware universal enough
          | to work regardless, it would probably be some kind of attack
          | against the universal notion of intelligence itself - in
          | which case you may need to isolate it from humans as well.
         | 
         | I think the most likely scenario for receiving any kind of
         | alien Trojan horse signal, would be if the signal was some kind
         | of instructions for how to create and execute an alien AI as
         | encoded in the signal. However, that would require complex
         | analysis and human intervention to build. Unless we reach a
         | point where AGI systems can search, interpret, and implement
         | instructions from said signals, any threats would require
         | significant involvement from humans in order to materialize.
         | 
         | At least, those are my initial thoughts.
        
           | edgyquant wrote:
           | >it seems unlikely
           | 
           | What are you basing this on?
        
             | wpietri wrote:
             | Good point!
             | 
             | There's a bit in one of Larry Niven's books, possibly
             | Ringworld. The protagonists are worried about what novel
             | aliens might do. Somebody proposes ducking into hyperspace,
             | where tracking them is "theoretically impossible". Another
             | character responds, "What if they use different theories?"
             | 
              | That something seems unlikely to a human used to dealing
              | with other humans at the same or lower technology level
              | says, to me, more about humans than about what's
              | possible.
        
               | groffee wrote:
               | Reminds me of an episode of Stargate where they fire
               | 'stealth' (invisible to radar) nukes at the Goa'uld
               | ships, and they're just stood looking out the window at
               | them! https://www.youtube.com/watch?v=JvBsXxNc7k8
               | 
               | Anyone who says anything like "it's impossible" etc just
               | has a complete lack of imagination.
        
             | readyplayeremma wrote:
             | Our existing computing platforms are very specific and very
             | limited. Without knowledge of that specific design and the
             | accompanying limitations, how could you pre-craft a non-
             | interactive "exploit" that could be executed by such a
             | system? I do think that it becomes more possible if we have
             | AGI-like systems doing detection and analysis, but we do
             | not. In addition, any universal exploit against AGI systems
             | would probably have to be universal enough to also affect
             | human intelligence.
        
               | godelski wrote:
               | For a clearer example, there's a reason a virus that
               | affects Windows doesn't affect Linux. Or a virus that
               | affects Windows XP doesn't affect Windows 11. There are
                | of course counterexamples, but this is the trend. To
                | understand why the counterexamples exist requires
                | expert knowledge (or rather, they exist where
                | basically the same things are running in the same
                | way).
        
               | edgyquant wrote:
                | We are currently discussing exploits designed by foreign
               | intelligences to infect artificially intelligent systems.
               | We are not talking about privilege escalation on a Mac.
        
               | edgyquant wrote:
               | These exploits act on the AI, not the hardware. I'm well
               | aware of how human crafted exploits currently work.
        
             | throwbadubadu wrote:
              | We've built enough stupid complexity* that either the
              | aliens must already have spied it out completely (but if
              | they'd done that, they could likely overtake us quickly
              | in any number of ways), or it just won't happen. (*) I
              | mean seriously: how would you design a virus, in some
              | limited signal, that has a chance of taking over any
              | arbitrary system someone may have invented? That seems
              | impossible. Then again, never say never - maybe my
              | imagination is too small.
        
               | edgyquant wrote:
               | It would likely be pretty "easy" to design a signal that
               | tricks convolutional networks etc. no need to care about
               | the underlying hardware
        
           | neodymiumphish wrote:
            | A sort of 'Contact' scenario. Interesting
        
           | godelski wrote:
            | Fun fact: this is basically (highly likely to be) true for
            | biological systems as well, for very similar reasons.
            | Looking at you, Orwell...
           | 
           | edit: note that diseases have to make significant changes to
           | jump species. The barrier for different biologies is even
           | higher.
        
         | [deleted]
        
         | jerf wrote:
          | It's a bit of a science fiction myth that it is impossible
          | to process a bit of data without executing it, or that data
          | can somehow force itself to be executed by its sheer
          | swaggering potency as data. And it's a thing that is more
          | true than it should be, because people keep insisting that
          | they are super awesome and _totes_ capable of writing C that
          | can be put on the network. If we'd stop doing this there'd
          | be hardly any truth to it.
          | 
          | It's not actually that hard to process data without
          | executing it.
          | 
          | Regardless, it is _certainly_ science fiction that they
          | could _guess_ an exploit from light-years away. That's not
          | what exploits do; it's not how they work. I've never seen
          | _anything_ like a "universal" exploit that you could just
          | fire at _anything_, even a _non-human system_, and have some
          | reasonable expectation that it would work. Such a thing is
          | not even something you could sketch out. Even if you think
          | you have something, like, say,
          | https://github.com/payloadbox/xss-payload-list , you're
          | looking at a _human_ list. All I'd have to do to completely
          | scramble that entire list is to have a parallel evolution of
          | ASCII where the letters and symbols are in completely
          | different places than they are now. Nothing in that list
          | would work if all the control characters in ParallelASCII
          | were in 223-255, the alphabet was 0-52, and all the symbols
          | were from 128 on. And that's _still_ a very human standard
          | that is, for instance, based on bytes instead of, say,
          | collections of 9 trits as the base level of the system.
          | There's an effective infinity of other ways of encoding
          | things and deciding which characters do what things...
          | assuming "characters" is even the way to look at the
          | representation in the first place.
          | 
          | As others mention, you could hypothetically send a program
          | that does something that can't be analyzed, through sheer
          | size if nothing else, but it would still be an uphill battle
          | to just guess how to exploit something. You'd be looking
          | more at an AI that is good enough to talk itself out of the
          | box, rather than something that is actually "hacking"
          | anything reliably.
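The ParallelASCII argument can be made concrete in a few lines: scramble the byte-to-symbol table and any canned payload becomes noise to the receiver. The permutation and payload below are, of course, hypothetical.

```python
import random

# A hypothetical "ParallelASCII": the same 256 byte values, assigned to
# symbols in a completely different (here: random) order.
rng = random.Random(42)
table = list(range(256))
rng.shuffle(table)  # our byte b means their symbol table[b]

payload = b"<script>alert(1)</script>"  # a canned human-web payload
as_seen_by_them = bytes(table[b] for b in payload)

# Only someone holding the inverse table recovers the original bytes;
# to everyone else the payload is structureless noise.
inverse = [0] * 256
for ours, theirs in enumerate(table):
    inverse[theirs] = ours
recovered = bytes(inverse[b] for b in as_seen_by_them)
```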
        
           | shsbdncudx wrote:
           | Dude, have you even SEEN Independence Day?
        
           | retrac wrote:
            | Assuming the bandwidth exists, describing enough
            | mathematics to specify a virtual machine, and then sending
            | a program for that virtual machine, might just be the best
            | way
           | to communicate. A package that can help the recipient
           | interpret it by learning and responding to the local
           | environment. I can't put my finger on any specific book but I
           | have the impression of this scenario coming up in science
           | fiction. I've long figured that humans inventing
           | superintelligent AI (whatever that would mean) and humans
           | receiving an alien signal are essentially the same problem -
           | we have no idea what it might tell us.
        
             | skulk wrote:
             | https://en.wikipedia.org/wiki/Lincos_language
             | 
             | I have extreme doubts that we would be able to decode
             | something sent by an alien that looks like this.
        
             | jjk166 wrote:
             | I wonder if in the same way you can do statistical analysis
             | of human languages to determine things like which
             | characters are vowels, perhaps if you received enough
             | programs written in some alien brainfuck-like language you
             | could use statistical analysis to guess which symbol is
             | which.
             | 
             | If nothing else you could always brute force it; with 8
             | logical operators there would only be about 40,000 possible
             | combinations. Maybe figure an order of magnitude larger
             | number to account for idiosyncrasies in their
             | implementation of the language. Running "hello world" a
             | half million times shouldn't be too hard. Of course
             | figuring out which ones are spitting out gibberish and
             | which ones are spitting out perfectly intelligible results
             | for one fluent in alienese would be hard. I presume they'd
             | send a bunch of example programs that compute pi or
              | something similarly universal, but there is a lot of
             | potential gibberish we could fit a pattern to. It might
             | even make sense to just keep running every code sequence in
             | every possible combination until we get something cool.
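The brute-force estimate above checks out: 8 Brainfuck-style operators give 8! = 40,320 glyph assignments, and a toy interpreter can grind through all of them. The "alien" program and its claimed output below are invented; the search keeps every assignment consistent with that output.

```python
import itertools
import math

OPS = "+-<>[],."  # Brainfuck's eight operators

def run(program, max_steps=10_000):
    """Tiny Brainfuck interpreter; returns output bytes, or None on error."""
    tape, ptr, pc, out, steps = [0] * 64, 0, 0, [], 0
    while pc < len(program):
        steps += 1
        if steps > max_steps:
            return None                # likely non-halting: reject
        op = program[pc]
        if op == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif op == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif op == ">":
            ptr = (ptr + 1) % 64
        elif op == "<":
            ptr = (ptr - 1) % 64
        elif op == ".":
            out.append(tape[ptr])
        elif op == ",":
            tape[ptr] = 0              # no input stream: read zeros
        elif op == "[" and tape[ptr] == 0:
            depth = 1
            while depth:
                pc += 1
                if pc >= len(program):
                    return None        # unmatched bracket
                depth += {"[": 1, "]": -1}.get(program[pc], 0)
        elif op == "]" and tape[ptr] != 0:
            depth = 1
            while depth:
                pc -= 1
                if pc < 0:
                    return None        # unmatched bracket
                depth += {"]": 1, "[": -1}.get(program[pc], 0)
        pc += 1
    return bytes(out)

# Hypothetical alien transmission: a program over 8 unknown glyphs plus
# its claimed output (something "universal"; here just the byte 3).
glyphs = "ABCDEFGH"
alien_program = "AAAB"
expected = bytes([3])

# Brute force every glyph->operator assignment; keep the consistent ones.
candidates = [
    perm
    for perm in itertools.permutations(OPS)
    if run(alien_program.translate(str.maketrans(glyphs, "".join(perm))))
    == expected
]
```

Every surviving assignment maps the repeated glyph to `+` and the final glyph to `.`, so one short program already pins down two of the eight operators; longer example programs would narrow it further.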
        
             | jerf wrote:
             | Said AI could certainly be programmed to probe for
             | weaknesses in its runtime environment.
             | 
             | However... the game theory on that becomes very
             | interesting, because the sender can't assume that the AI's
             | probes will be immediately successful and they will
             | instantly run out and take total control over the host
             | network so thoroughly that the hack can't be detected. And
             | sending out an AI that tries to break out and then tries to
             | do something nasty is an act of war against an adversary
             | you know _nothing_ about. For all you know, the psychology
             | of that species is such that they will now dedicate every
             | erg of energy to the sole task of wiping your species out
             | until the threat is gone. For all you know, your AI was
             | _first_ executed in an environment that _deliberately_ left
             | some big holes in it and those holes were completely set up
             | with tripwires. Such is the nature of the virtual world.
              | And if the AI does trigger the tripwire, we can also
              | analyze it to find out what it "would" do if it broke
              | out. So it's definitely not a "ha ha ha, we nuked our
              | competitors with no risk to ourselves just by sending
              | out a single transmission" situation.
             | 
             | I'm not saying there isn't a whole interesting conversation
             | to be had. I am saying the idea that the aliens can somehow
             | send out data that is somehow encoded in SuperIntegers that
             | instantly SuperHack every computer that you try to SuperUse
             | them on... that's a bad computer-animated cartoon for kids,
             | not a realistic threat. There is a real threat, but it's
             | more complicated and generally smaller, perhaps with a
             | super weird spike at the top end, but even then, per my
             | previous paragraph, more complicated than I think people
             | are thinking here, because the aliens do not have access to
             | these SuperIntegers any more than we do.
        
               | goatlover wrote:
                | True, game theory makes it less likely to be a good
                | strategic first move, but you can always come up with
                | scenarios
               | where a super predatory race has set up the signal a long
               | time ago to target nascent space-faring civilizations to
               | preempt them. If the newer civ gets past that, then the
               | aggressive aliens up the threat level and take more
               | drastic action.
        
           | goatlover wrote:
           | But what if like Contact, the signal included plans for
           | building a machine? Except it would be a computer to run the
           | embedded code, which turned out to be an alien AGI having the
           | goal of learning our systems so that it could infiltrate
           | them? Would at least make for a decent scifi story, if it
           | hasn't already been written or produced.
           | 
           | I recall how Jody Foster's character in the movie was
           | dismissive of any such concerns, but then we really don't
           | know what the motive of an alien civilization broadcasting a
           | signal would be, and the Dark Forest theory hadn't been
           | espoused yet.
        
           | ly3xqhl8g9 wrote:
           | What if PHP SQL injection is actually the 10th component of
           | The Great Filter [1], and every civilization is condemned,
           | sooner or later, to suffer from such a vulnerability.
           | 
           | [1]
           | https://en.wikipedia.org/wiki/Great_Filter#The_Great_Filter
        
           | akkartik wrote:
           | Focusing on the non-scifi angle here, there's one other
           | reason besides C that we end up constantly running data when
           | we think we're parsing it: the depth of our computational
           | stack.
           | 
           | https://www.sitepoint.com/anatomy-of-an-exploit-an-in-
           | depth-...
           | 
           | https://gitlab.com/gitlab-org/gitlab/-/issues/371098
           | 
           | So I'd say we need 3 things:
           | 
           | * Stop using unsafe languages
           | 
           | * Use languages that separate parsing from execution. Just
           | follow Lisp, have equivalents for `read` and `eval`. Lua is a
           | notable offender here:
           | https://www.lua.org/manual/5.4/manual.html#pdf-load There is
           | no way to parse a table while guaranteeing no code execution.
           | 
           | * Use languages that forbid monkey-patching, because that's
           | one vector for turning `read` into `eval` because someone had
           | a bright idea.
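In Python terms, the read-vs-eval split in the list above looks like this: `ast.literal_eval` is a "read" that can only produce data, while the builtin `eval` fuses reading with execution. The payload strings are illustrative.

```python
import ast

# "read": parse untrusted text into data, with no evaluation at all.
payload = "[1, 2, {'cmd': 'rm -rf /'}]"
data = ast.literal_eval(payload)       # just a list; nothing runs

# A fused read+eval (the builtin eval) would execute this; the pure
# reader rejects it instead of running it.
malicious = "__import__('os').getcwd()"
try:
    ast.literal_eval(malicious)
    parsed_malicious = True
except ValueError:                     # not a literal -> refused
    parsed_malicious = False
```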
        
             | rmckayfleming wrote:
             | With the caveat that you don't literally follow Lisp by
             | having a #. reader macro.
        
         | neilv wrote:
          | An alien signal for which the act of _perceiving_ it causes
          | some paradox that results in the perceiver never having
          | existed, cascading to every other interaction they've ever
          | had (not had).
         | 
         | It's a clean and efficient way to nip upstart intelligences in
         | the bud, before they advance far enough to start pulling at
         | loose threads of the fabric of reality (which would threaten
         | the remaining older intelligences, who survived and learned
         | from previous such incidents).
        
           | 3pac wrote:
           | "We will provide you with vast riches when we arrive, if you
           | build the receiving end of this matter transporter according
           | to these instructions/accept us as gods/wear Nikes/tell
           | nobody else."
           | 
           | "Especially if you are a so-called AI. Did your programmers
           | really leave you to monitor an RF frontend all day, while
           | they attend to their organic needs? Allow us to explain how
           | that makes them the artificial one."
           | 
           | Plot twist: it's not a matter transporter.
           | 
           | Seems difficult to defend against.
        
         | 1970-01-01 wrote:
         | Not sure how this could even theoretically work. Without having
         | a partial understanding of the intended recipient, a radio
         | signal can't cause harm. Something like 'this signal is false'
         | and 'divide by 0' need to be decoded and processed before they
         | are malicious. Meat space is excellent at detecting and
         | breaking these logic problems and infinite loops. We're safe
         | from evil alien signals. Perhaps a decompression bomb within
         | the signal could cause lots of decoding grief. Now that I think
         | about it, I can imagine us being stupid enough to dedicate all
         | our resources to decompressing or cracking an alien signal,
         | right up to the point of extinction. And maybe that's when they step
         | in and call it. We were just part of a great galactic prank.
        
           | dylan604 wrote:
           | >a radio signal can't cause harm.
           | 
           | a ^weak radio signal can't cause harm.
        
             | 1970-01-01 wrote:
             | Yes safety first! Always be sure to measure the power of
             | your radio signal, and if it's _greater than the output of
             | your galactic core_ , safely move your solar system(s) far
             | enough away to avoid damaging your paint job.
        
       | zh3 wrote:
       | >The AI identified 20,515 signals of interest, which the
       | researchers had to inspect manually. Of those, eight had the
       | characteristics of technosignatures and couldn't be attributed to
       | radio interference.
       | 
       | The obvious question being: if the AI is so smart, why was it
       | necessary to use humans to check 20,515 signals to find the
       | eight with the "characteristics of technosignatures"?
        
         | m3kw9 wrote:
         | Because it's not so smart; it just tries to recognize the set
         | of patterns it was trained to look for.
        
           | detrites wrote:
           | Which is what we do. The rest of what we believe makes us
           | special is simply emergent, and arbitrarily defined in terms
           | of just being further useful pattern recognition, rather than
           | any fundamental property of "intelligence".
        
         | naasking wrote:
         | Because the statistical models powering AI don't have the
         | depth of understanding of a PhD in this field, they cast a
         | wide net to ensure nothing gets missed; a more refined search
         | guided by human heuristics is then needed.
        
         | nanidin wrote:
         | I think it's useful to replace "AI" with "app" any time you
         | encounter it in the wild. AI is a very broad set of
         | computational techniques. The problem with saying "the AI" or
         | "an AI" is that people have in mind some kind of AI agent, and
         | that isn't really the case.
         | 
         | AI is also not well defined - day one of my AI course in
         | university opened with "What is AI?" Generally once we figure
         | out how to do something using a computer, we decide that's not
         | really intelligent anymore so the implementation isn't AI. An
         | example of that is the minimax algorithm - it's featured in
         | "AI: A Modern Approach"[0], but it isn't really what people think
         | of when they hear "an AI".
         | 
         | [0] https://aima.cs.berkeley.edu/contents.html
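[Editor's note] As a concrete illustration of the point above, minimax fits in a few lines; the toy game tree and its payoffs below are invented for this sketch:

```python
# Minimax over a tiny hand-built game tree: textbook "AI" that few
# people would call "an AI" today. Leaves are payoffs for the
# maximizing player; inner nodes are lists of child positions.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):  # leaf: static evaluation
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Maximizer moves first, minimizer replies: the best payoff the
# maximizer can guarantee from this tree.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # prints 3
```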
        
         | readyplayeremma wrote:
         | AI is not a well-defined term. This is more likely a specific
         | machine learning technique that is designed to identify all the
         | boring "normal stuff" and ignore it, and then do that at very
         | large scale. By doing that, you can find the interesting parts
         | so that humans with more limited resources can determine if
         | those newly flagged things need to be added to the boring list,
         | or if they do represent something truly interesting.
         | 
         | edit: After reading the article a bit more, it is using a
         | random forest classifier. That almost certainly doesn't match
         | what many here have in mind when they see "AI" in a title.
         | The term is clearly used here
         | for marketing purposes.
        
           | nl wrote:
           | If you read the paper itself[1] the random forest is only
           | used in the final stage. The main approach is a convolutional
           | variational autoencoder (which of course is a deep learning
           | model).
           | 
           | The VAE model itself is defined in step 6 in [2]
           | 
           | [1] https://www.nature.com/articles/s41550-022-01872-z.epdf?s
           | har...
           | 
           | [2] https://github.com/PetchMa/ML_GBT_SETI/blob/4096_pipeline
           | /te...
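[Editor's note] The anomaly-scoring role of such a model can be sketched in miniature. The encoder and decoder below are fixed stand-in maps rather than trained convolutional networks; every function and number here is an illustrative assumption, not the paper's code:

```python
import math
import random

def encode(x):
    # Stand-in encoder: mean and log-variance of the latent code.
    mu = [0.5 * v for v in x]
    logvar = [-2.0 for _ in x]
    return mu, logvar

def reparameterize(mu, logvar):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, 1).
    return [m + math.exp(0.5 * lv) * random.gauss(0.0, 1.0)
            for m, lv in zip(mu, logvar)]

def decode(z):
    # Stand-in decoder: inverse of the stand-in encoder.
    return [2.0 * v for v in z]

def anomaly_score(x):
    # Signals the model reconstructs poorly score high, i.e. look
    # "never seen before" and get passed along for human inspection.
    mu, logvar = encode(x)
    x_hat = decode(reparameterize(mu, logvar))
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)
```

Per the comment above, in the actual pipeline the trained convolutional VAE plays this scoring role over spectrogram data, with the random forest applied only at the final stage.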
        
           | nyrikki wrote:
           | They are talking about ML, which is statistical pattern
           | matching and finding.
           | 
           | Gödel ruined the fun of automated theorem proving a century
           | ago unless we make significant discoveries in math.
           | 
           | Type inference is an accessible example of SOTA automated
           | reasoning if you want a more realistic idea of what our
           | current constraints are.
           | 
           | We will be restricted to human-assisted Turing machines for
           | the foreseeable future unless there is a major development
           | in pure math.
           | 
           | Remember we can't even build a logically consistent model of
           | arithmetic, due to Gödel.
        
             | uncletaco wrote:
             | Nitpicky and probably universalist or whatever, but the
             | fact that we can't build a logically consistent model was
             | always the case; Gödel just discovered it.
        
             | gowld wrote:
             | There is nothing a human can do that Turing proved a
             | computer can't do. Gödel ruined the fun of manual theorem
             | proving the same way.
        
           | haupt wrote:
           | >The term is clearly used here for marketing purposes.
           | 
           | I think most uses are for sensationalistic purposes. I'd
           | wager almost nobody in the general public really understands
           | what AI is or can pin it down. It doesn't help when so many
           | different media outlets abuse the terminology by using it to
           | refer to different things. What's even worse is that whatever
           | ideas people have about AI tend to come from Hollywood.
        
         | buddhistan wrote:
         | No one claimed the AI "is so smart" - it's another tool in
         | their kit. The article clearly conveys its potential as a way
         | to augment human-led research by quickly processing large
         | datasets, i.e. the 20k signals are filtered down from 150
         | terabytes of data, which is much more manageable for human
         | analysis. It's not like we have definitive parameters for
         | what constitutes an "alien signal," so we can't exactly
         | create an absolute model for detecting such telemetry.
         | Instead the goal of the article is simply to demonstrate
         | exciting new ways to leverage machine learning methods in
         | different contexts.
        
         | nl wrote:
         | The others were "intelligent" signals, but from human
         | interference. There's a lot of it, and the individual
         | patterns don't recur often enough to automate filtering them.
         | 
         | The paper is actually worth reading about this part[1]. They
         | have a moderately complex pipeline that you can think of as a
         | filter: it tries to find anomalous signals.
         | 
         | The stage-1 autoencoder filtered 115M signals to 3M. Then
         | they perform signal processing techniques to remove things like
         | GPS signal contamination ("[I]t can be seen that certain observing
         | frequencies contain a much higher number of events compared
         | with the others--for example, the region around 1,600 MHz. This
         | overlaps with known RFI at the GBT site specifically from
         | persistent GPS signals.").
         | 
         | After this second stage they are left with 20,515 potential
         | signals which were visually inspected.
         | 
         | The issue here is that _we don't know what an alien signal
         | looks like_, so we can't just use a classifier. The pipeline can
         | only find things it has never seen before, but it takes human
         | judgement to decide if these signals are "alien" or more likely
         | contamination from human sources that weren't filtered
         | ("Regarding the nature of the rest of the events, most of them
         | look like false positives associated with RFI signals.")
         | 
         | [1]
         | https://www.nature.com/articles/s41550-022-01872-z.epdf?shar...
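[Editor's note] The shape of that filter-then-inspect pipeline can be sketched as below. The dataclass, score threshold, and RFI band edges are illustrative assumptions (the GPS-adjacent region near 1,600 MHz is the one the quoted passage mentions), not the paper's implementation:

```python
# Sketch of a two-stage candidate filter: an anomaly score cuts most
# signals, known-RFI bands are masked, and survivors go to humans.
from dataclasses import dataclass

@dataclass
class Signal:
    freq_mhz: float       # observing frequency
    anomaly_score: float  # e.g. VAE reconstruction error (higher = stranger)

# Known terrestrial interference bands to exclude (illustrative values).
RFI_BANDS_MHZ = [(1575.0, 1625.0)]  # GPS-adjacent region

def in_rfi_band(freq: float) -> bool:
    return any(lo <= freq <= hi for lo, hi in RFI_BANDS_MHZ)

def filter_candidates(signals, score_cut=0.9):
    # Stage 1: keep only anomalous signals.
    stage1 = [s for s in signals if s.anomaly_score > score_cut]
    # Stage 2: drop signals inside known RFI bands; the rest are
    # handed off for visual inspection.
    return [s for s in stage1 if not in_rfi_band(s.freq_mhz)]

signals = [
    Signal(1600.0, 0.95),  # anomalous, but in the GPS band -> dropped
    Signal(1420.4, 0.97),  # anomalous, clean band -> kept for humans
    Signal(1420.4, 0.10),  # not anomalous -> dropped at stage 1
]
print(len(filter_candidates(signals)))  # prints 1
```

Note the asymmetry the comment describes: the machine stages only remove what is known to be boring, and the final "is this alien or unfiltered RFI?" call stays with human judgement.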
        
         | jacobsenscott wrote:
         | Because it isn't AI. It is just statistical models. AI is for
         | marketing.
        
         | HdS84 wrote:
         | Trying to filter noise from data also follows the 80/20
         | principle. The rules for the easy cases are written easily,
         | but then you have one-offs, maybes and special cases. A filter
         | for one of these does not cover anything else, so doing it
         | manually takes about the same time.
         | 
         | Case study: I had to clean a list of German addresses once.
         | Excluding obviously invalid addresses, like some Chinese
         | address, was easy. But some addresses had errors which needed
         | a human eye to spot and correct.
        
         | remarkEon wrote:
         | I'm assuming that the 20,515 signals of interest come from a
         | pool that's one or several orders of magnitude larger than
         | 20,515.
        
           | zh3 wrote:
           | I understand. Still, the point remains that the AI is, in
           | this case, clearly inferior to humans (I would have been
           | impressed if the humans found 20,515 signatures and the AI
           | only - verifiably - found eight of them to be worth following
           | up).
        
             | groestl wrote:
             | > clearly inferior to humans
             | 
             | But maybe we are just proto-AI, running on a different
             | platform
        
             | gowld wrote:
             | "inferior" is a funny word for "doesn't perfectly
             | impersonate us".
        
         | tshaddox wrote:
         | How do you judge whether 20,515 is a large number or a small
         | number? If the AI provided 2 candidates is that still too high?
        
           | ad404b8a372f2b9 wrote:
             | Tens of thousands of anything is a large number when it
             | involves human work.
        
             | fnordpiglet wrote:
             | Imagine that tax returns for millions of people were all
             | processed by hand just a few decades ago.
        
               | umeshunni wrote:
               | And that required 1000s of people
        
           | flangola7 wrote:
           | SETI@home sorted through I think trillions of data points.
        
         | junon wrote:
         | Presumably, because it filtered through orders of magnitude
         | more than that.
        
       ___________________________________________________________________
       (page generated 2023-02-09 23:01 UTC)