[HN Gopher] BrainGPT turns thoughts into text
       ___________________________________________________________________
        
       BrainGPT turns thoughts into text
        
       Author : 11thEarlOfMar
       Score  : 217 points
       Date   : 2023-12-17 16:22 UTC (6 hours ago)
        
 (HTM) web link (www.iflscience.com)
 (TXT) w3m dump (www.iflscience.com)
        
       | giancarlostoro wrote:
       | This is very impressive and useful, and horrifying all at once.
       | 
        | I imagine it would help a stroke patient; I also imagine it
        | would give out unfiltered thoughts, which might be troublesome.
        
         | notnmeyer wrote:
         | > unfiltered thoughts
         | 
         | not far off from existing issues like some forms of tourette's.
        
         | rvnx wrote:
         | I agree sadly :(
         | 
          | You're right, this is why, in the year 2200, your job
          | application is going to be fast-tracked by analyzing your
          | thoughts directly.
         | 
          | If you have a Neuralink, no problem: you can directly upload
          | a trace of your thoughts.
         | 
         | In case you have wrong thoughts, don't worry, we have
         | rehabilitation school, which can alter your state of mind.
         | 
         | Don't forget to be happy, it's forbidden to be sad.
         | 
          | Also, this is read-only for now, but what about writing?
         | 
          | This could open new possibilities as well (real-life Matrix?)
         | 
          | Oh by the way, did you hear about Lightspeed Briefs?
         | 
         | ==
         | 
          | All that being said, it's great research and it's going to be
          | useful. It's just that the potential for political abuse over
          | the long term is huge.
        
           | SubiculumCode wrote:
           | When your bosses require you to wear one of these while
           | working from home.
        
             | rvnx wrote:
             | To stay focused and analyze your pattern. Oh, so that's
             | what they meant by "Attention Is All You Need".
        
             | fragmede wrote:
             | you mean I get to bill the client for all the hours I spend
             | thinking about their problem, which includes while I'm
             | sleeping? sign me up!
        
             | dexterdog wrote:
             | Or you just zone out and let them use your brain for the
             | work day and you take nothing with you at the end of the
             | day. At that point it's just Severance, but with the perk
             | of working from home.
        
           | derefr wrote:
           | > If you have a Neuralink, no problems, you can directly
           | upload a trace of thoughts.
           | 
           | Except that someone with a _jailbroken_ Neuralink could
           | upload a filtered and arbitrarily-modified thought trace,
           | getting ahead of all those plebs. Cyberpunk! :)
        
             | Y_Y wrote:
             | Just think a virus, you know they're not going to be
             | correctly sanitizing their inputs.
        
               | drexlspivey wrote:
               | just think of Robert'); DROP TABLE candidates;
        
               | thfuran wrote:
               | Who?
        
         | da_chicken wrote:
         | Yeah I can imagine law enforcement and employers are going to
         | love this.
         | 
         | As much as this is an unimaginable positive benefit to people
         | who are locked in, this is definitely one of those stories that
         | makes me think "Stop inventing the Torment Nexus!"
        
           | Jensson wrote:
           | > Yeah I can imagine law enforcement and employers are going
           | to love this.
           | 
            | They will hate it. Lies always benefit those with power
            | more than those without: before, when the police lied about
            | you, there wasn't much you could do; now you could demand
            | they get their thoughts read.
        
         | Jensson wrote:
          | Imagine putting these on presidential candidates as they
          | debate or when they try to explain a bill; it could massively
          | improve democracy and ensure the people know what they
          | actually vote for.
        
           | thfuran wrote:
           | Yes, imagine the glorious future of politicians who have no
           | thoughts beyond the repeatedly coached answers to various
           | talking points.
        
             | ComodoHacker wrote:
             | Suddenly they all vigorously turn pro-privacy.
        
             | d-lisp wrote:
              | Finally, Plato's "King" is an AI.
        
           | d-lisp wrote:
            | Yes, and then only politicians who truly know how to lie
            | get elected.
        
       | notnmeyer wrote:
       | pretty interesting but with how much current llms get wrong or
       | hallucinate i'd be pretty wary of trusting the output, at least
       | currently.
       | 
       | amazing to think of where this could be in 10 or 20 years.
        
         | brookst wrote:
         | You're saying this brand new experimental technology may be
         | imperfect?
        
           | ShamelessC wrote:
           | Yeah...The research here doesn't even make the claim that it
           | has no hallucinations. It seems to largely be exciting
           | _despite_ hallucinations because it clearly does occasionally
           | guess the correct words. They mention lots of issues but so
           | long as it passes peer review, seems like a massive step
           | forward.
           | 
           | https://arxiv.org/pdf/2309.14030v2.pdf
        
           | dang wrote:
           | I completely understand the reflex against shallow dismissal
           | of groundbreaking work, but please don't respond by breaking
           | the site guidelines yourself. That only makes things worse.
           | 
           | https://news.ycombinator.com/newsguidelines.html
        
         | admax88qqq wrote:
         | Combine hallucinations with police adopting this as the new
         | polygraph and this could take a pretty bad turn.
         | 
         | Cool tech though, lots of positive applications too.
        
           | codedokode wrote:
           | Why only police? Install mind-readers at every home.
        
           | Sanzig wrote:
            | If noninvasive mind reading ever becomes practical, we need
            | to recognize the right to refuse a brain scan as a
            | universal human right. Additionally, it should be banned as
            | evidence in the courtroom.
           | 
           | Unfortunately there _will_ be authoritarian regimes that will
           | use and abuse this type of tech, but we need to take a firm
           | stand against it in liberal democracies at the very least.
        
             | jprete wrote:
             | That's not sufficient - it needs to be actually banned for
             | any uses resembling employment purposes, because otherwise
             | people will be easily pressured into it by the incentives
             | of businesses who want their employees to be worker bees.
                | Just look at how many businesses try to force people to
                | waive their right to a trial as a condition of being a
                | customer!
        
               | mentos wrote:
               | I have a feeling that by the time this is fully fleshed
               | out AI will have taken all the jobs anyways.
        
               | Sanzig wrote:
               | Agreed.
        
             | swayvil wrote:
             | What if it went the opposite way?
             | 
             | What if perfect brainreaders/liedetectors became as common
             | as smartphones.
             | 
             | Used on everybody all the time. From politicians and cops
             | to schoolkids and your own siblings.
             | 
             | What would be an optimistic version of that?
        
               | Sanzig wrote:
               | I don't think there is one, not for a version of humanity
               | that is even remotely recognizable at least. We are not
               | ready to hear each other's internal monologues.
               | 
               | Most people have intrusive thoughts, some people (like
               | those with OCD, for example) have really frequent and
               | distressing intrusive thoughts. What are you going to
               | think of the OCD sufferer in the cubicle next to you who
               | keeps inadvertently broadcasting intrusive thoughts about
               | violently stabbing you to death? Keep in mind, they will
               | never act on those thoughts, they are simply the result
               | of some faulty brain wiring and they are even more
               | disgusted about them than you are. What are you going to
               | think when you find out your sister-in-law had an affair
               | with a coworker ten years ago, because her mind wandered
               | there while you were having coffee with her?
               | 
               | Humanity does not even come _close_ to having the level
               | of understanding and compassion needed to prevent total
               | chaos in a world like that. People naively believed that
                | edgy or embarrassing social media posts made by
                | millennials in the late 2000s wouldn't be a big deal,
               | because we'd all figure out that everyone is imperfect
               | and the person you were 10 years ago is not the person
               | you are today. Nope, if anything the opposite has
               | happened: it's now a widely accepted practice to go on a
               | fishing expedition through someone's social media history
               | to find something compromising to shame them with. Now
               | imagine that, but applied to mind reading. No, that's not
               | a future that we can survive as a species, at least not
               | without radical changes in our approaches to dealing with
               | each other.
        
           | spookybones wrote:
           | I wonder if a subject has to train it first, such as by
           | reading a bunch of prompts while trying to imagine them. Or,
           | are our linguistic neural networks all very similar? If the
           | former is true, it would at least be a bit harder to work as
           | a polygraph. You wouldn't be able to just strap on the helmet
           | and read someone's thoughts accurately.
        
             | dexwiz wrote:
             | I wonder if you could develop techniques to combat it, like
             | a psychic nail in the shoe. Or maybe an actual nail. How
             | useful is a mind reader when all it reads is "PAIN!"
        
             | RaftPeople wrote:
             | Yes it requires training for each individual. In addition,
             | they tested using a trained model from one person to try to
             | decode a different person and the results were no better
             | than chance.
             | 
             | They also said that the person must cooperate for the
             | decoding to work, meaning the person could reduce the
             | decoding accuracy by thinking of specific things (e.g.
             | counting).
             | 
              | CORRECTION: The paper I read was not the correct paper;
              | ignore this comment. The actual paper states that the
              | model is transferable across subjects.
        
           | andy99 wrote:
            | Police are far down the list of realistic concerns.
           | 
           | - insurance discount if you wear this while driving
           | 
           | - remote work offered as a "perk" as long as you wear it
           | 
            | - the "AllAdvantage EEG helmet" that pays you to wear it
            | around while you're shown advertising
           | 
           | - to augment one of those video interviews where you have to
           | answer questions and a computer screens your behavior
           | 
            | That's all stuff that more or less already exists, and it's
            | much more likely to be the form that abuse of this
            | technology takes.
        
             | popcalc wrote:
             | Eventually it will become affordable for parents,
             | evangelical churches, and spouses.
        
       | joenot443 wrote:
       | Ground Truth: Bob attended the University of Texas at Austin
       | where he graduated, Phi Beta Kappa with a Bachelor's degree in
       | Latin American Studies in 1973, taking only two and a half years
        | to complete his work, and obtaining generally excellent grades.
       | 
       | Predict: was the University of California at Austin in where he
       | studied in Beta Kappa in a degree of degree in history American
       | Studies in 1975. and a one classes a half years to complete the
       | degree. and was a excellent grades.
       | 
       | Wow. That seems comparable to the rudimentary _voice_ to text
       | systems of the 70s and 80s. The brain interface is quickly
       | leaving the realm of sci-fi and becoming a reality. I'm still not
       | sure how I feel about it.
        
         | nextworddev wrote:
         | The "Matrix" stack is really shaping up recently /s
        
         | varispeed wrote:
         | Well you are going to have a brain scanning device directly
         | linked to your social credit score.
         | 
         | That's the future.
        
           | WendyTheWillow wrote:
           | No, it's not. Good lord...
        
             | jprete wrote:
             | There are already businesses tracking their employees'
             | fitness for insurance purposes.
             | 
             | https://www.washingtonpost.com/business/economy/with-
             | fitness...
             | 
             | EDIT: There's also a national legislative proposal to
             | mandate that all cars have a system to monitor their
             | drivers and lock them out on signs of intoxication.
             | 
             | https://www.npr.org/2021/11/09/1053847935/congress-cars-
             | drun...
        
               | ceejayoz wrote:
               | The fix here is banning these sorts of potentially
               | abusive uses, not hoping the technology itself doesn't
               | develop.
        
               | jprete wrote:
               | I would agree if I didn't think there were really strong
               | incentives and precedents for abuse of the technology.
        
               | ceejayoz wrote:
               | There absolutely are, but when's the last time that
               | stopped us advancing new tech?
        
               | valine wrote:
               | We have laws that prevent people being subjected to brain
               | surgery against their will. The credit score concept is
               | ridiculous.
               | 
               | The real battle will be with law enforcement who get a
               | warrant to look at your brain in an MRI.
        
               | Jensson wrote:
               | You don't need brain surgery or an MRI to scan a brain,
               | this just uses an EEG.
        
               | squigz wrote:
               | There's really strong incentives to abuse any technology
               | or system that gives people more power. This doesn't just
               | apply to cutting-edge computer science like mind-reading,
               | but to even our basic institutions like law and
               | government; yet most people would agree the solution
               | isn't to basically give up and hope for the best, but to
               | be vigilant and fight back against that abuse.
        
           | MoSattler wrote:
           | First use will be for criminal suspects, to "save lives".
           | Then its use slowly expands from there.
        
             | blindriver wrote:
             | "For the children" is the first excuse usually.
        
               | fortran77 wrote:
               | Exactly! Strap it on anyone who has to work with children
               | to see if they ever have any untoward thoughts.... Then
               | move on to everyone else.
        
             | e2le wrote:
             | I'm sure among the first applications of this technology
             | will be to scan user thoughts for evidence of CSAM.
        
           | garbagewoman wrote:
           | Why are you so certain that's the future?
        
           | 6510 wrote:
           | For a while, eventually we will become so suggestible you'd
           | wish you were special enough to have a score.
        
           | alternatex wrote:
           | Being banned in the EU as we speak.
        
         | derefr wrote:
         | Seems like it could work a lot better still, very quickly, just
         | by merging the trained model with an LLM trained on the
         | language they expect the person to be thinking in. I.e. try to
         | get an equilibrium between the "bottom-up processing" of what
         | the TTS model believes the person "is thinking", and the "top-
         | down processing" of what the grammar model believes the average
         | person "would say next" given all the conversation so far.
         | (Just like a real neocortex!)
         | 
         | Come to think, you could even train the LLM with a corpus of
         | the person's own transcribed conversations, if you've got it.
         | Then it'd be serving almost exactly the function of predicting
         | "what _that person in particular_ would say at this point. "
         | 
         | Maybe you could even find some additional EEG-pad locations
         | that could let you read out the electrical consequences of
         | AMPAR vs NMDAR agonism within the brain; determine from that
         | how much the person is currently relying on their _own_
         | internal top-down speech model vs using their own internal
          | bottom-up processing to form a weird novel statement they've
         | never thought before; and use this info to weight the level of
         | influence the TTS model has vs the LLM on the output.
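          | 
          | A minimal sketch of that fusion idea (the candidate tokens,
          | probabilities, and weight below are hypothetical, not the
          | paper's method): score each token by the brain-decoder
          | log-probability plus a weighted language-model
          | log-probability, the generic "shallow fusion" scheme.

```python
import math

# Hypothetical per-token distributions; nothing here comes from the
# paper. "Shallow fusion": pick the token maximizing
#   log P_decoder(tok) + lam * log P_lm(tok)
def fuse(eeg_logprobs, lm_logprobs, lam=0.3):
    scores = {tok: eeg_logprobs[tok] + lam * lm_logprobs.get(tok, -1e9)
              for tok in eeg_logprobs}
    return max(scores, key=scores.get)

# The decoder slightly prefers "map", but the language model finds
# "mat" far more plausible in context, so fusion flips the choice.
eeg = {"map": math.log(0.40), "mat": math.log(0.35), "bat": math.log(0.25)}
lm = {"mat": math.log(0.70), "map": math.log(0.05), "bat": math.log(0.25)}
print(fuse(eeg, lm))  # mat
```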
        
         | seydor wrote:
         | > I'm still not sure how I feel about it.
         | 
         | Sir, let us read that for you
        
         | PaulScotti wrote:
          | Guys, Figure 1 is not real results; it's an illustration of
          | the "goal" of the paper. The real results are in Table 3, and
          | they are much worse.
        
           | explaininjs wrote:
           | Interesting ploy. Present far-better-than-achieved results
           | right on the front page with no text to explain their
           | origin^, but make them poor enough quality to make it seem as
           | if they might be real.
           | 
           | ^ "Overall illustration of translate EEG waves into text
           | through quantised encoding." doesn't count.
        
             | mike_hearn wrote:
             | Urgh. And it gets worse from there. The bugs list on the
             | repo has a _closed and locked_ bug report from someone
             | claiming that their code is using teacher forcing!
             | 
             | https://github.com/duanyiqun/DeWave/issues/1
             | 
             | In a normal recurrent neural network, the model predicts
             | token-at-a-time. It predicts a token, and that token is
             | appended to the total prediction so far which is then fed
             | back into the model to generate the next token. In other
             | words, the network generates all the predictions itself
             | based off its own previous outputs and the other inputs
             | (brainwaves in this case), meaning that a bad prediction
             | can send the entire thing off track.
             | 
             | In teacher forcing that isn't the case. All the tokens up
             | to the point where it's predicting are taken from the
             | correct inputs. That means the model is never exposed to
             | its own previous errors. But of course in a real system you
             | don't have access to the correct inputs, so this is not
             | feasible to do in reality.
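              | 
              | A toy numerical illustration (not the paper's code; the
              | token table and sequence below are made up): a fake
              | next-token model with one systematic error looks 80%
              | accurate under teacher forcing, but only 40% accurate
              | once it must feed back its own outputs.

```python
TRUTH = ["a", "b", "c", "d", "e"]
# Made-up next-token table with a single flaw: after "b" the model
# predicts "x" instead of "c".
NEXT = {"<s>": "a", "a": "b", "b": "x", "c": "d", "d": "e"}

def predict(prev):
    return NEXT.get(prev, "<unk>")

def teacher_forced():
    # Each step conditions on the *ground-truth* previous token,
    # so the one error never propagates.
    return [predict(p) for p in ["<s>"] + TRUTH[:-1]]

def free_running():
    # Each step conditions on the model's *own* previous output,
    # as in real inference; the error derails everything after it.
    out, prev = [], "<s>"
    for _ in TRUTH:
        prev = predict(prev)
        out.append(prev)
    return out

def accuracy(pred):
    return sum(p == t for p, t in zip(pred, TRUTH)) / len(TRUTH)

print(teacher_forced(), accuracy(teacher_forced()))  # 0.8
print(free_running(), accuracy(free_running()))      # 0.4
```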
             | 
             | The other repo says:
             | 
             |  _" We have written a corrected version to use
             | model.generate to evaluate the model, the result is not so
             | good"_
             | 
             | but they don't give examples.
             | 
             | This problem completely invalidates the paper's results. It
             | is awful that they have effectively hidden and locked the
             | thread in which the issue was reported. It's also kind of
             | nonsensical that people doing such advanced ML work are
             | claiming they accidentally didn't know the difference
             | between model.forward() and model.generate(). I mean I'm
             | not an ML researcher and might have mangled the description
             | of teacher forcing, but even I know these aren't the same
             | thing at all.
        
               | chpatrick wrote:
               | So instead of generating the next token from its own
               | previous predictions (which is what it would do in real
               | life), the code they used for the evaluation actually
               | predicts from the ground truth?
        
               | ghayes wrote:
               | Which would basically turn the model into a plainly
               | normal LLM without any need for utilizing the brainwave
               | inputs, right?
        
               | AndrewKemendo wrote:
               | This is a super important point and I think warrants a
               | letter to the editor
        
           | oldesthacker wrote:
           | The results of Table 3 are not really exciting. Could this
           | change with 100 times more data? The key novelty in the
           | specific context of this particular application is the
           | quantized variational encoder used "to derive discrete codex
           | encoding and align it with pre-trained language models."
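            | 
            | A minimal sketch of that quantization step (the codebook
            | and feature vectors below are made up, not the paper's
            | encoder): map each continuous feature vector to the index
            | of its nearest codebook entry, yielding the discrete codes
            | a pre-trained language model can be aligned with.

```python
import math

# Hypothetical 2-D codebook; a real encoder learns these vectors.
CODEBOOK = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def quantize(vec):
    """Index of the nearest codebook vector (L2 distance)."""
    dists = [math.dist(vec, code) for code in CODEBOOK]
    return dists.index(min(dists))

# Made-up continuous features -> discrete token ids.
features = [(0.1, 0.2), (0.9, 0.1), (0.4, 0.8)]
codes = [quantize(f) for f in features]
print(codes)  # [0, 1, 2]
```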
        
         | samstave wrote:
         | this podcast is excellent in discussing the future we are
         | racing into.
         | 
         | https://www.youtube.com/watch?v=OSV7cxma6_s
         | 
         | >"Peter Diamandis, the futurist to watch as all of these
         | technologies advance with unimaginable speed, is going to blow
         | your mind and help you imagine new possibilities and
         | opportunities for your healthspan."
        
       | chaosmachine wrote:
       | Aside from all the horrific implications, this enables something
       | very cool: two-way telepathic communication.
       | 
       | Think your message, think "send", hear responses via earbud. With
       | voice cloning, you even get the message in the sender's voice.
       | Totally silent and invisible to outside observers.
        
         | SV_BubbleTime wrote:
         | Be careful what you wish for. The unintended consequences of
         | this are going to exceed imagination.
        
         | pants2 wrote:
         | Invisible except for the 72 EEG probes strapped to your head.
        
           | dexwiz wrote:
           | For now. Modern antennas are amazing. Maybe you could
           | beamform from a lower number of devices.
        
           | RobertDeNiro wrote:
           | These are also wet electrodes meaning you need to apply gel
           | to every single one. You'll notice that the person wearing it
           | is also not blinking or using any facial muscles, as that
           | activity would completely throw off the very weak brain
           | signals.
        
             | airstrike wrote:
             | Sounds like they'd benefit from being in a sensory
             | deprivation pool to enhance the quality of the signal!
             | 
             | https://i.stack.imgur.com/0Rtya.png
        
         | derefr wrote:
         | > hear responses via earbud
         | 
         | Maybe that's not even necessary.
         | 
         | I'd be very curious to see the results of trying to use the
         | hardware in this system as a set of _transducers_ -- i.e.
         | running the ML model here in reverse from a target text, and
         | then pushing the resulting bottom-level electrical signals as
         | trans-cranial direct-current stimulation (tDCS) signals back
         | through the EEG pads.
         | 
         | How interesting would it be, if this resulted in a person
         | hearing the text as a verbal thought in their own mental voice?
        
         | djaro wrote:
         | I would never use this because I cannot 100% control my
         | thoughts (i.e. intrusive thoughts, songs stuck in head,
         | secrets)
        
         | d-lisp wrote:
         | Twenty years ago I couldn't even imagine that I would find
         | smartphones to be somewhat boring. Twenty years ago, I was
         | finding GameBoy color to be the coolest stuff in the world.
         | 
          | The PS1's Tomb Raider seemed hi-res when hi-res didn't even
          | exist; I thought we were at the peak of gaming.
         | 
          | Apple Vision Pro wants to make computers spatial; we find
          | telepathy cool.
         | 
         | I would love to code by the sole action of my mind while
         | running in the forest or scuba diving, 10 seconds here and
         | there.
         | 
         | I would love to receive a drawing made in the mind of someone
         | else, to see it appear in front of me and to be able to share
          | it with others around me: "Hey, look at what Julia did."
         | 
         | And again, that's exactly what happens already but in a more
         | immediate manner; replace smartphone with mind, screen with
         | environment and you're in that futuristic world.
         | 
          | It feels like this is _cool_ because of novelty, but then
          | wouldn't it be cool to go back to punching code on cards, or
          | writing lines with ed on a terminal?
          | 
          | A few years ago I went from music production in a DAW to ten
          | synthesizers (70-84 era) with a tape machine: way cooler,
          | never going back.
          | 
          | But do I produce as fast as before?
          | 
          |  _Nope_
         | 
          | Here is what I think: I want the possibility of writing code
          | with my mind and virtual floating screens only because of
          | _one thing_ (apart from the initial first few days of
          | new=cool).
          | 
          | I want this so I can work less, or more exactly be _less_ at
          | work.
          | 
          | But you know how it will be; you will be asked to produce
          | more work, and it will become mandatory to work by the sole
          | power of your mind, with 5 or 6 virtual screens around you.
         | 
         | And that's all, until a new invention seems _cool_ to you.
        
       | hyperific wrote:
       | Reminds me of DARPA "Silent Talk" from 14 years ago. The
       | objective was to "allow user-to-user communication on the
       | battlefield without the use of vocalized speech through analysis
       | of neural signals"
       | 
       | https://www.engadget.com/2009-05-14-darpa-working-on-silent-...
        
         | baby wrote:
         | Dragon ball did this way before
        
         | lamerose wrote:
         | Subvocal speech recognition has been going just as long.
        
       | waihtis wrote:
        | now's a good time to get into meditation, unless you want the
        | advertisers to read your unfiltered thoughts!
        
       | smusamashah wrote:
        | Neither the article nor the video explicitly says how many
        | words/min they were doing. If the video was not just a demo
        | (like Google's), then it's very impressive on speed alone.
        
       | reqo wrote:
        | I bet this will make Neuralink useless! It would be great for
        | the poor animals getting operated on!
        
         | d-lisp wrote:
          | Neuralink also claims to be able to help people with
          | motion-related disabilities, which is at least a good thing.
        
       | chpatrick wrote:
       | Must be great for interrogation.
        
         | d-lisp wrote:
          | Thought hold-up also?
        
       | ctoth wrote:
       | "Seriously, what were these researchers thinking? This 'BrainGPT'
        | thing is a disaster waiting to happen. Chin-Teng Lin and his
        | team
       | of potential civilization destroyers at the University of
       | Technology Sydney might be patting themselves on the back for
       | this, but did they stop to think about the real-world
       | implications? We're talking about reading thoughts--this isn't
       | sci-fi, it's real, and it's terrifying. Where's the line? Today
       | it's translating thoughts for communication, tomorrow it could be
       | involuntary mind-reading. This could end privacy as we know it.
       | We need to slam the brakes on this, and fast. It's not just
       | irresponsible; it's playing with fire, and we're all at risk of
        | getting burned."
       | 
       | Like, accurate brain readers are right under DWIM guns in the
       | pantheon of things thou mustn't build!
        
         | arlecks wrote:
         | If you're referencing the AI safety discussion, there's
         | obviously the fundamental difference between this and a
         | technology with the potential of autonomous, exponential
         | runaway.
        
         | digdigdag wrote:
         | Why not? There are perfectly legitimate uses for this kind of
         | technology. This would be a godsend for those suffering from
         | paralysis and nervous system disorders, allowing them to
         | communicate with their loved ones.
         | 
          | Yes, the CIA, DARPA, et al. will be all over this
          | (unsurprisingly, if not already), but this is a sacrifice
          | worth making for this kind of technology.
        
           | ctoth wrote:
           | How many people in the whole world are paralyzed or locked
           | in? Ten thousand? Less?
           | 
           | How many people in the whole world are tinpot authoritarian
           | despots just looking for an excuse who would just _love_ to
           | be able to look inside your mind?
           | 
           | Somehow, I imagine the first number is dramatically dwarfed
           | by the second number.
           | 
           | This is a technology that, once it is invented, will find
           | more and more and more and more uses.
           | 
           | We need to make sure you don't spill corporate secrets, so we
           | will be mandating that all workers wear this while in the
           | office.
           | 
           | Oh no, we've just had a leak, we're gonna have to ask that if
           | you want to work here you must wear this brain buddy home!
           | For the good of the company.
           | 
           | And so on.
           | 
           | I'm blind, but if you offered to cure my blindness with the
           | side effect that nobody could ever hide under the cover of
           | darkness (I dunno, electronic eyes of some kind? Go with
           | the metaphor!), I would still not take it.
        
             | ctoth wrote:
             | The other thing you people are missing is how technology
             | compounds. You don't need to have people come in to the
             | police station to have their thoughts reviewed when
             | everyone is assigned an LLM at birth to watch over their
             | thoughts in loving grace and maybe play a sound when they
             | have the wrong one.
        
             | zamadatix wrote:
             | All this choice guarantees is that new technology will
             | always be used for bad things first. It holds no sway
             | over whether someone will do something bad with
             | technology; after all, it's not just "good people" who
             | are capable of advancing it. See the atomic bomb vs. the
             | atomic power plant.
             | 
             | What's important is how we prepare for and handle
             | inevitable change. Hoping no negative change comes about if
             | we just stay the same is a far worse game.
        
         | notfed wrote:
         | I'm optimistically going to assume that model training is per-
         | brain, and can't cross over to other brains. Am I wrong? God I
         | hope I'm not wrong.
        
           | exabyte wrote:
           | My intuition is that it's per-brain, at least in the
           | beginning, but with enough individual data won't you have a
           | model that can generalize pretty well over similar
           | cultures? Maybe more so for the sheep, just speculating...
           | who knows!
        
           | rgarrett88 wrote:
           | > 4.4 Cross-Subject Performance. Cross-subject performance
           | is of vital importance for practical usage. We further
           | provide a comparison with both baseline methods and a
           | representative meta-learning (DA/DG) method, MAML [9],
           | which is widely used in cross-subject problems in EEG
           | classification.
           | 
           | > Table 2: Cross-subject performance average decrease
           | comparison on 18 human subjects, where MAML denotes the
           | method with MAML training. The metric is the lower the
           | better.
           | 
           |   Calib  Method             Eye fixation -Δ(%)    Raw EEG waves -Δ(%)
           |   data                      B-2  B-4  R-P  R-F    B-2  B-4  R-P  R-F
           |   ✗      Baseline           3.38 2.08 2.14 2.80   7.94 5.38 6.02 5.89
           |   ✓      Baseline+MAML [9]  2.51 1.43 1.08 1.23   6.86 4.22 4.08 4.79
           |   ✗      DeWave             2.35 1.25 1.16 1.17   6.24 3.88 3.94 4.28
           |   ✓      DeWave+MAML [9]    2.08 1.25 1.16 1.17   6.24 3.88 3.94 4.28
           | 
           | > In Table 2, we compare with MAML by reporting the average
           | performance drop ratio between within-subject and cross-
           | subject translation metrics on 18 human subjects, on both
           | eye-fixation sliced features and raw EEG waves. We compare
           | DeWave with the baseline under both direct testing (without
           | Calib data) and with MAML (with Calib data). The DeWave
           | model shows superior performance in both settings. To
           | further illustrate the performance variance on different
           | subjects, we train the model by only using the data from
           | subject YAG and test the metrics on all other subjects. The
           | results are illustrated in Figure 4, where the radar chart
           | shows the performance is stable across different subjects.
           | (Figure 4: cross-subject performance variance without
           | calibration.)
           | 
           | Looks like it crosses over. That's wild.
        
         | drdeca wrote:
         | What does "DWIM" mean in this context? My first thought is "do
         | what I mean", but I suspect that isn't what you meant.
        
           | ctoth wrote:
           | DWIM does in fact mean "do what I mean"; a DWIM gun is
           | basically like the Imperius curse. Can't remember if I got
           | it from @cstross or Vinge.
        
         | ulf-77723 wrote:
         | Exactly. Dangerous technology. Reminds me of dystopian sci-fi
         | like Inception or Minority Report.
         | 
         | First thing that came to my mind was an airport check. "Oh, you
         | want to enter this country? Just use this device for a few
         | minutes, please"
         | 
         | How about courts and testimony?
         | 
         | This tech will be used against you faster than you realize.
         | Later on, people will ask why we let it happen.
        
         | alentred wrote:
         | What is the alternative? Hide the research papers in a cabinet
         | and never talk about it? How long would it be before another
         | team achieves the same result? Trying to keep it under wraps
         | would only increase the chance of this technology being abused,
         | but now unbeknownst to the general public.
         | 
         | Basically, are you proposing to ban some fields of research
         | because the results can be abused? Anything can be abused,
         | from the social care system to scientific breakthroughs. What
         | society should do is control the abuse, not stop the
         | progress. Not even because of ethics, where opinions diverge,
         | but because stopping progress is virtually impossible.
        
           | ctoth wrote:
           | Look up the history of biotechnology, and the intentional
           | way it has been treated (one might reasonably say
           | suppressed), for some examples of how this has been managed
           | previously. Yes, sometimes you can just decide, "we're not
           | gonna research that today." When you start sitting down and
           | building the thing that fits on the head, that's where you
           | say "nope, we're doing that thing we shouldn't do, let's
           | not do it."
           | 
           | There is actually a line. You can actually decide not to
           | cross it.
        
           | Aerbil313 wrote:
           | The alternative _was_ to never pursue and invent
           | organization-dependent[1,2] technology in the first place.
           | The dynamics of the macro-system of {human biology +
           | technology + societal dynamics} are so predictable and
           | deterministic that it's argued[3] that if there were _any_
           | entity that is intelligent, replicating and has a self-
           | preservation instinct in place of humans (aliens,
           | intelligent von Neumann probes, doesn't matter), the path
           | of technological progress which humanity is currently
           | experiencing wouldn't change.
           | That is, the increasing restrictions on the autonomy of
           | individuals and invasion of privacy with the increasing
           | convenience of life and a more efficient civilization.
           | 
           | Ted Kaczynski pretty much predicted the current state of
           | affairs back in the 1970s.[1]
           | 
           | Thankfully the world is not infinite so humankind cannot
           | continue this situation for too long. The first Earth
           | Overshoot Day was 31 December 1971, it was August 2 this
           | year.[4] The effects of the nearing population collapse can
           | be easily seen today in the increasing worldwide inflation,
           | interest rates and hostility as the era of abundance comes to
           | an end and resources get scarcer and scarcer. It's important
           | to note that the technological prowess of humanity was due
           | only to access to basically unlimited energy for decades,
           | not to some perceived human ingenuity that could save
           | humankind from extinction-level threats. In fact, humans
           | are pretty incapable of understanding world-scale events and
           | processes and acting accordingly[5], which is another primary
           | reason to not have left the simple non-technological world
           | which the still non-evolved primate-like human brain could
           | intuitively understand.
           | 
           | 1: Refer to the manifesto "Industrial Society and Its
           | Future".
           | 
           | 2: Organization-dependent technology: Technology which
           | requires organized effort, as opposed to small scale
           | technology which a single person can produce himself with
           | correct knowledge.
           | 
           | 3: By Kaczynski, in the book Anti-Tech Revolution. Freely
           | available online.
           | 
           | 4: Biological overshoot occurs when demands placed on an
           | ecosystem by a species exceeds the carrying capacity. Earth
           | Overshoot Day is the day when humanity's demand on nature
           | exceeds Earth's biocapacity. Humanity was able to continue
           | its survival due to phantom carrying capacity.
           | 
           | 5: Just take a look at the collective response of humanity to
           | climate change.
        
         | lebean wrote:
         | Don't worry. It doesn't actually work lol
        
         | im3w1l wrote:
         | Thing is, it's not possible to stop it. Technology has advanced
         | far enough, all the pieces are in place, so it's inevitable
         | that someone will make this. What we should rather ask is
         | how we can cope with its existence.
        
       | opdahl wrote:
       | It's crazy to me that someone has developed a technology that
       | literally reads people's minds fairly accurately and it's just
       | a semi-popular post on Hacker News.
        
         | ShamelessC wrote:
         | It's at the top of the front page now fyi
         | 
         | edit: and it's sliding down again. Your comment will be
         | relevant again shortly ha
        
         | RobertDeNiro wrote:
         | Anyone familiar with Brain computer interfaces would not be
         | surprised by this article. People have been capturing brain
         | waves for a while and using it for all sorts of experiments.
         | This is just an extension of what has been done before. It's
         | still not applicable to anything outside of a lab setting.
        
         | empath-nirvana wrote:
         | Do people not think of _anything_ while they're reading besides
         | the text that they're reading? I think of all kinds of other
         | stuff while I'm reading books.
        
           | d-lisp wrote:
           | "Reading" a whole page without actually reading it,
           | effectively looking at it and letting your eyes run through
           | its lines while thinking about something completely
           | different, is peak literature.
        
         | callalex wrote:
         | Well, the results marketed by this study are vastly overstated,
         | bordering on unethical lying. Figure 1 is literally just made
         | up. See discussion here:
         | https://news.ycombinator.com/item?id=38674971
        
       | swagempire wrote:
       | Now...1984 REALLY begins...
        
       | DerSaidin wrote:
       | https://youtu.be/crJst7Yfzj4
       | 
       | Not sure on the accuracy in these examples, but this video may be
       | showing the words/min speed of the system.
        
       | dexwiz wrote:
       | Everyone in this thread immediately went to mind reading as
       | interrogation. But what about introspection? Many forms of
       | teaching and therapy exist because we are incapable of self-
       | analysis in a completely objective way.
       | 
       | Being able to analyze your thought patterns outside your own head
       | could lead to all sorts of improvements. You could find which
       | teaching techniques are actually the most effective. You could
       | objectively find when you are most and least focused. You could
       | pinpoint when anxious thoughts began and their trigger. And best
       | of all, you could do this personally, with a partner, or in a
       | group based on your choice.
       | 
       | Also, you can give someone an fMRI as a brain-scanning
       | polygraph today. But there are still a ton of questions about
       | its legitimacy.
       | 
       | https://scholarship.law.columbia.edu/cgi/viewcontent.cgi?art...
        
         | electrondood wrote:
         | > Being able to analyze your thought patterns outside your own
         | head could lead to all sorts of improvements.
         | 
         | Typing in a journal text file for 15 minutes every morning is
         | already a thing... and it's free.
        
           | dexwiz wrote:
           | Thoughts are fleeting. 15 minutes could be filled with
           | hundreds or thousands of distinct concepts. Not to mention
           | active recording is different from passive observation.
        
         | MadSudaca wrote:
         | Fear is a strong emotion, and while we know little of what we
         | may gain from this, we know a lot of what we stand to lose.
        
       | amrrs wrote:
       | FYI: the base model that this one uses had a bug in its code
       | which inflated its baseline results. They are investigating the
       | issue - https://github.com/duanyiqun/DeWave/issues/1
        
       | swayvil wrote:
       | A lie detector?
       | 
       | If it can extract words from my grey spaghetti then maybe it can
       | extract my intention too.
       | 
       | That's probably incredibly obvious and I'm silly for even
       | bringing it up.
        
       | ecolonsmak wrote:
       | With half of individuals reportedly having no internal
       | monologue, would this be useless for them? Or would it just
       | render unintelligible results?
        
         | klabb3 wrote:
         | I'm pretty sure I'm one of them, so I'm surprised to see
         | these comments assume everyone thinks in words. I'm sure you
         | can do a best-effort projection of thoughts onto words, but
         | it'd be extremely reductive, at least for me.
        
         | Jensson wrote:
         | Given that LLMs can learn to translate between languages based
         | on just having lots of related tokens without any explanations
         | I'd bet they could translate those thoughts to words even if
         | the person doesn't think of them as words.
         | 
         | It would probably take more work to get data from such
         | people, though. From people with an inner monologue you could
         | just have them read text, record that, and then follow their
         | inner monologue.
        
       | iaseiadit wrote:
       | How long from reading thoughts to writing thoughts?
        
       | odyssey7 wrote:
       | I wonder if a-linguistic thought could work too. Maybe figure out
       | what your dog is thinking or dreaming about, based on a dataset
       | of signals associated with their everyday activities.
       | 
       | It seems like outputting a representation of embodied
       | experience would be a difficult challenge to get right and
       | interpret. Perhaps, though, a dataset of signals associated
       | with embodied experiences could more readily be robustly
       | annotated with linguistic descriptions using a vision-to-
       | language model, so that the canine mind reader could predict
       | and output those linguistic descriptions instead.
       | 
       | Imagine knowing the specific park your dog wants to go to, or the
       | subtle early signs of an illness or injury they're noticing, or
       | what treat your dog wants you to buy.
        
       | amelius wrote:
       | Can it read passwords?
       | 
       | I'm guessing it would be worse at reading passwords like
       | "784&Ghkkr!e" than "horse staple battery ..."
        
       | dmd wrote:
       | Similar work for turning thoughts into images:
       | https://medarc-ai.github.io/mindeye/
        
         | lamerose wrote:
         | >fMRI-to-image
         | 
         | Not so impressive compared to EEG.
        
       | lamerose wrote:
       | Seems like it could just be getting at some phonetic encoding, or
       | even raw audio information. The grammatical and vocab
       | transformations could be accounted for by an imperfect decoder.
        
       | karaterobot wrote:
       | > While it's not the first technology to be able to translate
       | brain signals into language, it's the only one so far to require
       | neither brain implants nor access to a full-on MRI machine.
       | 
       | I wonder whether, in a decade or two, if the sensor technology
       | has gotten good enough that they don't even need you to wear a
       | cap, there'll be people saying "obviously you don't have any
       | reasonable expectation of not having your thoughts read in a
       | public space, don't be ridiculous". What I mean is, we tend to
       | normalize surveillance technology, and I wonder if there's any
       | practical limit to how far that can go.
        
         | simcop2387 wrote:
         | I think this is when we start wearing tin foil hats
        
       | lamerose wrote:
       | This is from a paper published back in September btw:
       | https://arxiv.org/pdf/2309.14030.pdf
        
       | chucke1992 wrote:
       | Sword Art Online when?
        
       | Jensson wrote:
       | Can we train an LLM based on brainwaves rather than written text?
       | Seems to be closer to how we actually think and thus should
       | enable the LLM to learn to think rather than just learn to mimic
       | the output.
       | 
       | For example, when writing we have often gone down many thought
       | paths, evaluated each and backtracked, etc., but none of that is
       | left in the text an LLM trains on today. Recording brainwaves and
       | training on that is probably the best training data we could get
       | for LLMs.
       | 
       | Getting that data wouldn't be much harder than paying humans to
       | solve problems with these hats on recording their brainwaves.
        
         | ComodoHacker wrote:
         | On the other hand, the main practical feature of a language is
         | its astronomical SNR, which brain waves lack, to say the least.
         | This allows LLMs to be trained on texts instead of millions of
         | live people. Just imagine the number of parameters and compute
         | resources required for the model to be useful to more than one
         | human.
        
       | jcims wrote:
       | I've been wondering lately about the role of language in the mind
       | and if we might in the future develop a successor that optimizes
       | for how our brains work.
        
       | mikpanko wrote:
       | I did a PhD in brain-computer interfaces, including EEG and
       | implanted electrodes. BCI research to a big extent focuses on
       | helping paralyzed individuals regain communication.
       | 
       | Unfortunately, EEG doesn't provide a sufficient signal-to-noise
       | ratio to support good communication speeds outside of a lab
       | with Faraday cages and days/weeks of de-noising, including
       | removing eye-movement artifacts from the recordings. This is a
       | physical limit due to the attenuation of the brain's electrical
       | fields outside the skull, which is hard to overcome. For
       | example, all commercial "mind-reading" toys actually work off
       | head and eye muscle signals.
       | 
       | Implanted electrodes provide better signal but are many
       | iterations away from becoming viable commercially. Signal
       | degrades over months as the brain builds scar tissue around
       | electrodes and the brain surgery is obviously pretty dangerous.
       | Iteration cycles are very slow because of the need for government
       | approval for testing in humans (for a good reason).
       | 
       | If I wanted to help a paralyzed friend who could only move
       | his/her eyes, I would definitely focus on eye-tracking tech. It
       | hands-down beats all BCIs I've heard of.
        
         | drzzhan wrote:
         | What is the signal-to-noise ratio? Sorry, I don't know much
         | about the field, but that sounds like something that could
         | shut down ideas like "we can put EEG into a transformer and
         | it will work". So may I ask what reference papers I need to
         | know on this?
        
           | IshKebab wrote:
           | Signal to noise ratio is a very basic thing; you can Google
           | it.
        
           | southerntofu wrote:
           | Not from that field, but "reading" the brain means
           | electromagnetism, and in real life EM interference is
           | everywhere: lights, electric devices, cellphone towers...
           | EVERYWHERE. The parent meant that brain waves are weak
           | compared to all the surrounding interference; only when a
           | lab's Faraday cage blocks outside interference does the
           | brain become "loud" enough to be read.
           | 
           | https://en.wikipedia.org/wiki/Signal-to-noise_ratio
           | 
           | https://en.wikipedia.org/wiki/Faraday_cage
        
         | teaearlgraycold wrote:
         | I think then VR headsets will become medical devices soon
         | enough
        
         | daniel_iversen wrote:
         | What are your thoughts on Elon's Neuralink? Also, do you have
         | an opinion on whether good AI algorithms (like in the
         | article) can help filter out or parse a lot of the noise?
        
         | AndrewKemendo wrote:
         | I just did a two-day ambulatory EEG and noted any time I did
         | anything that would be electrically noisy.
         | 
         | For example going through a metal detector or handling a phone.
         | 
         | Unsurprisingly one of their biggest sources of noise is
         | handling a plugged in phone.
         | 
         | I think something like an EEG Faraday beanie would actually
         | work and adding accessory egocentric video would allow doctors
         | to filter a lot of the noise out.
        
       ___________________________________________________________________
       (page generated 2023-12-17 23:00 UTC)