[HN Gopher] An AI lawyer was set to argue in court - real lawyer...
       ___________________________________________________________________
        
       An AI lawyer was set to argue in court - real lawyers shut it down
        
       Author : isaacfrond
       Score  : 411 points
       Date   : 2023-01-26 08:56 UTC (14 hours ago)
        
 (HTM) web link (www.npr.org)
 (TXT) w3m dump (www.npr.org)
        
       | residualmind wrote:
       | One of the very few things giving me joy lately: People being
       | afraid of AI, feeling threatened and insecure in their
       | capabilities, because maybe those aren't so great after all...
        
       | qwerty456127 wrote:
        | Every person should still have the right to be defended by a
        | human lawyer, yet the right to voluntarily choose an AI lawyer
        | to either defend you or just assist you as you defend yourself
        | would be great to have. It could totally change the game where
        | (currently) whoever can afford expensive lawyers generally wins
        | and whoever can't automatically loses exorbitant sums of money.
        | Real lawyers will never let this happen.
        
       | sublinear wrote:
       | What a scam! AI should be used to help educate people if they
       | want to defend themselves in court (along with a few good books),
       | not replace professionals.
       | 
       | This is purely exploitative behavior from anyone offering such a
       | service and depressingly nihilistic behavior from anyone seeking
       | these kinds of services even if it's just fighting traffic
       | tickets for now.
       | 
       | I wish all professional services had similar watchdogs and
       | protections from unlicensed/unauthorized work!
       | 
        | We're at the point where leveraging technology is becoming
        | existential. Putting quite literally every aspect of life on
        | autopilot is not only absurd but a cancer.
       | 
        | If we're going to see the secular decline of certain professional
        | services, it should be at the hands of well-educated humans, not
        | roll-of-the-dice AI. Would a well-educated public not be a
        | massive net good for society, instead of exploiting the poor?
        | What a small-minded and backwards world we live in today.
        
       | markogrady wrote:
        | The HMCTS held a hackathon for future tech in the UK court
        | system a few years ago. The judges included people like the CEO
        | of the courts, as well as the Lord Chief Justice. There were all
        | sorts of firms involved, like Linklaters, Pinsent Masons and
        | Deloitte. We won with a simple Alexa lawyer that was to help
        | poor rental tenants: it generated documents to send a landlord,
        | and possible legal advice. The idea was specifically for people
        | who cannot afford a lawyer. There were a lot of influential
        | people who were very excited about this space, so it is strange
        | that when it actually gets implemented, it's not allowed.
       | 
       | I wonder what the wider implications are for the legal system.
       | Will there be less qualified human lawyers in the future due to
       | the lack of junior roles that are filled by AI? Will lawyers be
       | allowed to use AI to find different ways of looking at issues?
        
         | TeaDude wrote:
         | I'm interested. What sort of regulations do you think would
         | affect the robo-lawyer space in the UK?
         | 
         | Self-representation is frowned upon and mostly disallowed.
         | Lawyers are expensive. I'd genuinely consider having ChatGPT
         | fight for me.
        
           | danielfoster wrote:
           | Privacy seems like it would be a major issue. As a litigant,
           | I would not want the opposing side piping my case information
           | to a third party and having this information used to train
           | the AI for future cases.
           | 
            | AI could be very useful for helping pro se litigants prepare
            | documents. I imagine with this use case, as well as the oral
            | argument use case, judges are also worried about low-quality
            | output wasting the court's time.
        
           | etothepii wrote:
            | Self-representation is frowned upon ("a person who has
            | himself for a client has a fool for a client"), but where
            | in the UK system is it disallowed, unless you are a repeat
            | "freeman of the land" nonsense spouter?
        
         | NoboruWataya wrote:
         | Apart from being different jurisdictions, they are different
         | issues. The situation in the article involved a pro se litigant
         | feeding courtroom proceedings to the AI and regurgitating its
         | responses in real time. In that situation you are effectively
         | handing your agency over to the AI. You can't really be said to
         | be representing yourself in any real sense; you are mindlessly
         | parroting what is fed to you.
         | 
         | The situation you describe seems to be more akin to an advanced
         | search or information portal that people can use to guide their
         | self-representation, or even their decision to engage
         | lawyers/discussions with their lawyers (of course, maybe I'm
         | misunderstanding). That stuff has basically always been
         | allowed; nobody is threatening to prosecute Google because pro
         | se litigants use it in their research. There are plenty of
         | websites out there that discuss tenants' rights. There are even
         | template tenancy agreements available online for free.
         | 
         | Also, what were you proposing to use as the knowledge base for
         | your Alexa lawyer? Were you really planning on using ChatGPT or
         | some other general purpose AI? Or would the knowledge base be
         | carefully curated by qualified professionals? And who would
         | create and maintain it, the state? A regulated firm? Or a
         | startup with a name like "DoNotPay"?
        
           | brookst wrote:
           | Really good thoughts and treatment of the different issues.
           | The line between "tool" and "agent" is blurry and will
           | probably just keep getting blurrier. But I do think it's
           | important for our judicial system to ensure that any
           | delegation of representation is to a very qualified third
           | party, for both ethical and process/cost reasons.
           | 
           | I'm not sure the startup's name is especially germane though.
           | If anything, it seems to fit right in with human lawyers like
           | 1-800-BEAT-DUI.
        
         | dragonwriter wrote:
         | > Will there be less qualified human lawyers in the future due
         | to the lack of junior roles that are filled by AI?
         | 
         | I doubt it; AI will be a force multiplier from law school on
         | into practice. More value will mean more demand at all
         | experience levels.
        
       | lolinder wrote:
       | This company is either run by someone who doesn't understand the
       | tech or is willfully fraudulent. ChatGPT and company are far from
       | good enough to be entrusted with law. Having interacted
       | extensively with modern LLMs, I absolutely _know_ something like
       | this would happen:
       | 
       | > Defendant (as dictated by AI): The Supreme Court ruled in
       | Johnson v. Smith in 1978...
       | 
       | > Judge: There was no case Johnson v. Smith in 1978.
       | 
       | LLMs hallucinate, and there is absolutely no space for
       | hallucination in a court of law. The legal profession is perhaps
       | the closest one to computer programming, and _absolute_ precision
       | is required, not a just-barely-good-enough statistical machine.
        
         | jchw wrote:
         | Pretty sure the whole reason why DoNotPay actually exists is
         | because defending against parking tickets didn't actually
          | require a strong defense. The tickets were flawed automation,
          | and their formulaic nature justified an equally formulaic
          | response, or something to that effect. Whether the LLM was
         | actually going to output answers directly, or just be used to
         | drive a behavior tree or something like that, is a question I
         | don't see answered anywhere.
         | 
         | That said, if it's such a catastrophically stupid idea, I'm not
         | really sure why it had to be shot down so harshly: seems like
         | that problem would elegantly solve itself. I assume the real
         | reason it was shot down was out of fear that it _would_ work
         | well. Does anyone else have a better explanation for why there
         | was such a visceral response?
        
           | ufmace wrote:
           | Thinking about how the problem would "elegantly solve itself"
           | seems to illustrate the issue.
           | 
           | Someone using it in an actual courtroom would make a
           | boneheadedly dumb argument or refer to a nonexistent
           | precedent or something. Then maybe the judge gets upset and
           | gives them the harshest punishment or contempt of court or
           | they just lose the case. They may or may not ever get a
           | chance to fix it.
           | 
            | A failure mode of jail time and/or massive fines for your
            | customers doesn't sound all that elegant to me. This isn't a
            | thing to show people cat pictures; I don't think "move fast
            | and break things" is a good strategy here.
           | 
           | Not to say that there aren't some entrenched possibly corrupt
           | and self-serving interests here. But that doesn't mean they
           | don't have a point.
        
             | thinking4real wrote:
             | "Then maybe the judge gets upset and gives them the
             | harshest punishment or contempt of court"
             | 
             | That sounds like a horrible judge, maybe this AI can be
             | used to sniff them out and get rid of them.
        
               | underbluewaters wrote:
               | Judges would be absolutely right to punish lawyers or
               | defendants that are bullshitting the court. They are
               | wasting time and resources that would otherwise go
               | towards cases where people are actually representing
               | themselves in good faith.
        
             | treis wrote:
              | It's probably better than the existing alternative, which
              | is roughly: plead guilty because you don't have money to
              | pay a lawyer, or don't sue someone because you don't have
              | money to pay a lawyer.
        
           | abfan1127 wrote:
            | Protectionism. It's why they shout for regulation: keep the
            | others out, force consumers to use your product, make lots
            | of money.
        
           | fatbird wrote:
            | It had to be shot down harshly because there are some
            | premises of a courtroom proceeding that aren't met by AI as
            | we currently have it.
           | 
           | One of those is that the lawyer arguing a case is properly
           | credentialed and has been admitted to the bar, and is a
           | professional subject to malpractice standards, who can be
           | held responsible for their performance. An AI spitting out
           | statistically likely responses can't be considered an actual
           | party to the proceedings in that sense.
           | 
           | If a lawyer cites a non-existent precedent, they can make
           | their apologies to the court or be sanctioned. If the AI
           | cites a non-existent precedent, there's literally no way to
           | incorporate that error back into the AI because there's no
           | factual underlying model against which to check the AI's
           | output--unless you had an actual lawyer checking it, in which
           | case, what's the point of the AI?
           | 
           | Someone standing in court, repeating what they hear through
           | an earpiece, is literally committing a fraud on the court by
           | presenting themselves as a credentialled attorney. The stunt
           | of "haha, it was really just chatGPT!" would have had severe
           | legal consequences for everyone involved. The harsh response
           | saved DoNotPay from itself.
        
             | john_the_writer wrote:
              | I don't understand the "cites a non-existent precedent"
              | bit. Presumably the AI would have a database with a pile
              | of precedent. It wouldn't make up cites. It would have
              | "knowledge" of so much precedent that it could likely find
              | something to win either side of the argument.
        
             | freedomben wrote:
              | > _If the AI cites a non-existent precedent, there's
              | literally no way to incorporate that error back into the AI
              | because there's no factual underlying model against which
              | to check the AI's output--unless you had an actual lawyer
              | checking it, in which case, what's the point of the AI?_
             | 
             | IANAL, but I would bet the level of effort to fact check an
             | AI's output would be orders of magnitude lower than
             | researching and building all your own facts.
             | 
             | I used it to generate some ffmpeg commands. I had to verify
             | all the flags myself, but it was like 5 minutes of work
             | compared to probably hours it would have taken me to figure
             | them all out on my own.
        
               | gamblor956 wrote:
               | You would lose that bet.
               | 
               | Fact-checking nonsensical output would take a lot longer
               | than researching a single body of law, which you can
               | generally do by just looking up a recent case on the
               | matter. You don't need to check every cite; that will
               | have been done for you by the lawyers and judges involved
               | in that case.
               | 
               | But checking every cite in an AI's output: many of those
               | citations won't exist, and for the ones that do, you'll
               | need to closely read all of them to confirm that they say
               | what the AI claims they say, or are even within the
               | ballpark of what the AI claims they say.
        
               | freedomben wrote:
                | > _You don't need to check every cite; that will have
                | been done for you by the lawyers and judges involved in
                | that case._
               | 
               | Why would this be different with an AI assistant to help
               | you? It's not a binary "do or do not". Just because you
               | have an assistant doesn't mean you don't do anything.
               | Kind of like driver assist can handle some of the load vs
               | full self-driving.
               | 
                | > _But checking every cite in an AI's output: many of
                | those citations won't exist, and for the ones that do,
                | you'll need to closely read all of them to confirm that
                | they say what the AI claims they say, or are even within
                | the ballpark of what the AI claims they say._
               | 
                | But you'd have to do this _anyway_ if you did all the
                | research yourself. At least the AI assistant can help
                | give you some good leads so you don't have to start from
                | scratch. A lazy lawyer could skip some verifying, but a
                | good lawyer would still benefit from an AI assistant, as
                | was my original bet, just like they would benefit from
                | interns or paralegals, etc. And all those interns and
                | paralegals could still be there, helping verify facts.
        
               | gamblor956 wrote:
                | _But you'd have to do this anyway if you did all the
                | research yourself. At least the AI assistant can help
                | give you some good leads so you don't have to start from
                | scratch._
               | 
                | No, that's exactly the opposite of what I'm saying. If
                | you did the research yourself, you wouldn't need to
                | verify every cite once you find a relevant source/cite,
                | because previous lawyers would have already validated the
                | citations contained within that source. (A good lawyer
                | _should_ validate at least some of those cites, but
                | frequently that's not necessary unless you're dealing
                | with big stakes.)
               | 
                | And the AI assistant, at least this one and the ones
                | based on ChatGPT, _don't_ provide good leads. They
                | provide crap leads that not only don't exist, but
                | increase the amount of work.
        
               | bluGill wrote:
               | Fact checking an AI is still massively easier than
               | finding and reading all the precedent yourself. Real
               | lawyers of course already know the important precedent in
               | the areas they deal in, and they still have teams behind
               | the scene to search out more that might apply, and then
               | only read the ones the team says look important.
               | 
                | Of course there could be a difference between reading
                | all the cases an AI says are important and actually
                | finding the important cases, including ones the AI
                | didn't point you at. However, this is not what the bet
                | was about.
        
               | dragonwriter wrote:
               | > Fact checking an AI is still massively easier than
               | finding and reading all the precedent yourself.
               | 
               | Actually fact-checking an AI requires finding and reading
               | all the precedent yourself to verify that the AI has both
               | cited _accurately_ and not missed contradictory precedent
               | that is _more_ relevant (whether newer, from a higher
               | court, or more specifically on-point.)
               | 
               | If it has got an established track record, just as with a
               | human assistant, you can make an informed decision about
               | what corners you can afford to cut on that, but then you
               | aren't really fact-checking it.
               | 
               | OTOH, an AI properly trained on one of the existing
               | human-curated and annotated databases linking case law to
               | issues and tracking which cases apply, overrule, or
               | modify holdings from others might be _extremely_
               | impressive--but those are likely to be expensive products
               | tied to existing offerings from Westlaw, LexisNexis, etc.
        
               | john_the_writer wrote:
                | What do you mean "finding"? The AI would just return
                | links or raw text of the cases. Reading the findings
                | would be the same as reading any precedent. But the AI
                | could weight the results, and you'd only have to read
                | the high-scoring results. If the AI got it wrong, you'd
                | just refine the search and the AI would be trained.
               | 
                | As to the cost: if it removed the need for one legal
                | assistant or associate, then anything less than the cost
                | of employing said person would be profit. So if it cost
                | less than 50k a year you'd be saving. (The cost of
                | employing is more than just salary.)
        
               | dragonwriter wrote:
               | You can't validate that it is making the right citations
               | by only checking the cases it is citing, and the rankings
               | _it_ provides of those and other cases. You have to
               | validate the non-existence of other, particularly
               | contrary, cases it should be citing either additionally
               | or instead, which it may or may not have ranked as
               | relevant.
        
               | thombat wrote:
               | But when appearing in court you're in real-time: you
               | can't take 5 minutes to validate the AI output before
               | passing it on. You can do that for your opening
               | statements but once faced with the judge's rulings or
               | cross-examination you'll be in the weeds.
        
               | freedomben wrote:
               | Yeah that's fair, although if it was AI-assisted lawyer
               | then presumably you'd have done the research ahead of
               | time. But, for spontaneous stuff, you're totally right.
                | My original statement was thinking about it as a "prep
                | time" exercise, but spontaneous stuff would appear in
                | court. Although, the human lawyer (who should still be
                | similarly prepared for court) would be there to handle
                | those, possibly with some quick assistance.
        
               | dragonwriter wrote:
               | > if it was AI-assisted lawyer
               | 
               | If it was AI-assisted lawyer, it would be a whole
               | different discussion. Aside from requiring a live feed of
               | interactions to a remote system and other technical
               | details, "lawyers using supportive tools while exercising
               | their own judgement on behalf of their client" isn't
               | controversial the way marketing an automated system as,
               | or as a substitute for, legal counsel and representation
               | is.
        
           | marmetio wrote:
              | The specific scenario doesn't matter. It's illegal to
              | represent someone else in court if you're not a lawyer.
              | There are a lot of things you can't get a second chance
              | at if your lawyer messes up, and suing them can't fix
              | that. Lawyers and judges also negotiate, which a machine
              | can't do, because nobody feels an obligation to cut it
              | some slack. Also, now you're tainting case law with
              | machine-generated garbage.
           | Everything about the justice system assumes humans in the
           | loop. You can't bolt on this one thing without denying people
           | justice.
        
             | cwkoss wrote:
             | If you can sell a book that helps teach someone how to
             | represent themselves, why can't you sell a person access to
             | a robot that helps teach them how to represent themselves?
             | 
             | Why is the robot not "speech"?
        
               | yamtaddle wrote:
               | Can a lawyer get away with doing the same thing if, say,
               | they provide all their advice as .epub files? "Why, your
               | honor, these were merely books..."
               | 
               | [EDIT] That is, would they be immune from e.g.
               | malpractice if they did this so they "weren't
               | representing" the defendant?
        
               | cwkoss wrote:
               | If the book was written about a particular case, that
               | seems like specific legal advice.
               | 
                | If the book was a generalized "choose your own
                | adventure" where you compose a sensible legal argument
                | by selecting a particular template and filling it in
                | with relevant data, then use of the book essentially
                | lets the user find the pre-existing legal advice that is
                | relevant to their situation.
               | 
                | Chatbots as a system are arguably a lot more like the
                | latter than the former - it's a tool that someone can
                | use to 'legal advise' themselves.
        
               | marmetio wrote:
               | Are you still referring to the scenario from the article,
               | or a different one where it's a resource you use outside
               | of court?
               | 
               | > Here's how it was supposed to work: The person
               | challenging a speeding ticket would wear smart glasses
               | that both record court proceedings and dictate responses
               | into the defendant's ear from a small speaker.
               | 
               | Also, probably wouldn't matter. The interactive human-
               | ish-like nature might cross the line to being considered
               | as counsel, even if you said it wasn't. See my response
               | to your other comment.
        
               | yamtaddle wrote:
               | Right, this strikes me as exactly the kind of "I'm not
               | touching you!" argument that basically never works in a
               | court of law. The law's not like code. "Well it's not any
               | different than publishing a book, so this is just free
               | speech and not legal representation"; "OK, cool, well, we
               | both know that's sophist bullshit, judgement against you,
               | next case."
        
               | [deleted]
        
               | [deleted]
        
               | marmetio wrote:
               | People are often mistaken about how legal technicalities
               | work.
        
               | OkayPhysicist wrote:
                | By providing the words to say and arguments to make to
                | the court, in response to a specific case or
                | circumstance, DoNotPay was giving protected "legal
                | advice" as opposed to "legal information". There is some
                | ambiguity between legal advice and legal information,
                | but this isn't an ambiguous case.
        
               | [deleted]
        
               | marmetio wrote:
                | You're still illegally providing legal counsel if you're
                | not a lawyer, or committing malpractice if you are.
                | Using a machine to commit the same crime doesn't change
                | anything.
               | 
               | "Speech" would be like publishing a book about self-
               | representation. "Counsel" would be providing advice to a
               | defendant about their specific case. The machine would be
               | in the courtroom advising the defendant on their trial,
               | so that's counsel.
        
               | OkayPhysicist wrote:
               | A book gives legal information, not specific to a certain
               | circumstance or case. If your chatbot is considering the
               | specifics of a case before advising on a course of
               | action, it's probably giving legal advice.
        
             | john_the_writer wrote:
              | The tricky bit about your comment is "if you're not a
              | lawyer." In this case, who is the "you"?
        
               | marmetio wrote:
               | Not tricky at all. If someone is receiving counsel, then
               | someone is giving counsel. Hiding behind a machine adds a
               | pretty minor extra step to identifying the culprits, but
               | does not create ambiguity over whether they are culpable.
        
           | kadoban wrote:
           | > Does anyone else have a better explanation for why there
           | was such a visceral response?
           | 
           | It doesn't really matter if it'd work well or poorly. Lawyers
           | don't want to be replaced, and being a lawyer entails a great
           | ability to be annoying to delay/prevent things you don't want
           | to happen.
        
           | bavila wrote:
           | > Pretty sure the whole reason why DoNotPay actually exists
           | is because defending against parking tickets didn't actually
           | require a strong defense. The tickets were flawed
           | automation...
           | 
           | I have some past experience working in the courts in my
           | state, and I know there are many judges who are perfectly
           | fine with dismissing minor traffic infractions for no reason
           | other than that they feel like it. If you've got an otherwise
           | clean traffic abstract and sent in a reasonable sounding
           | letter contesting the infraction, these judges probably
           | aren't going to thoroughly read through every word of it and
           | contrast it with what was alleged in the citation. They don't
           | really care about the city making an extra $173 off your
           | parking ticket -- they just want to get through their
           | citation reviews before lunch. Case dismissed.
           | 
            | So I am not surprised at all by the success of DoNotPay for
            | minor traffic infractions. Most traffic courts are strained
            | by heavy caseloads. If you give them a reason to throw your
            | case out so they can go home on time, by all means, they
            | will take it.
        
             | lolinder wrote:
             | And I don't think anyone here has an issue with DoNotPay
             | providing pre-trial advice and tips for someone defending
             | themselves. It's bringing that into the courtroom that
             | crosses a line from defending yourself to hiring an AI
             | lawyer, and that line is where I'm very uncomfortable.
        
           | 2muchcoffeeman wrote:
            | > _That said, if it's such a catastrophically stupid idea,
            | I'm not really sure why it had to be shot down so harshly_
           | 
           | The title of the article seems misleading.
           | 
            | A techbro who doesn't appear to be a lawyer or to have any
            | understanding of the law wants to use AI so people can
            | defend themselves. It doesn't seem like any of this was done
            | with input from any bar associations. Without seeing the
            | emails and "threats", and ignoring the emotional language,
            | it sounds like these people were helping him out:
           | 
           | >"In particular, Browder said one state bar official noted
           | that the unauthorized practice of law is a misdemeanor in
           | some states punishable up to six months in county jail."
           | 
            | Were these emails "angry", or just stating very plainly and
            | with forceful language that if you do this without the AI
            | having the appropriate qualifications, you are most probably
            | going to jail?
           | 
           | It even sounds like Browder didn't really widely publicise
           | the fact that a case defended by an AI was about to happen.
           | 
           | > _As word got out, an uneasy buzz began to swirl among
           | various state bar officials, according to Browder. He says
           | angry letters began to pour in._
           | 
           | Really sounds like these letter writers did him a favour.
        
           | jacobsenscott wrote:
           | DoNotPay exists because AI vaporware is the new crypto
           | vaporware, which was the new IoT vaporware, which was the new
           | Web2 vaporware, and so on. Build a "product" on AI and you
           | get (in this case) $28 million in funding. Pull stunts like
           | this to generate a little buzz for the next round of funding.
           | Then bail out with your golden parachute. Now you have
           | experience founding a startup - do it again for $50 million.
        
           | WinstonSmith84 wrote:
           | This is the obvious point: they fear it would work well and
           | they would have to slowly say goodbye to their extremely
           | well-paid profession.
           | 
           | We are so close to a new disruptive revolution where a lot
           | of jobs (not just lawyers) will be made obsolete. Possibly
           | similar to inventions like assembly lines, or cars. Such an
           | exciting time to be alive!
        
           | dragonwriter wrote:
           | > That said, if it's such a catastrophically stupid idea, I'm
           | not really sure why it had to be shot down so harshly
           | 
           | To avoid the catastrophe that makes it a catastrophically bad
           | idea.
           | 
           | > I assume the real reason it was shot down was out of fear
           | that it would work well. Does anyone else have a better
           | explanation for why there was such a visceral response?
           | 
           | It had already worked badly (subpoenaing the key adverse
           | witness, who would provide a basically automatic defense win,
           | and one of the most common wins for this kind of case, if
           | they failed to show up.)
        
           | danShumway wrote:
           | > Does anyone else have a better explanation for why there
           | was such a visceral response?
           | 
           | I can't speak for lawyers in general or what everyone's
           | motivations would be, but my initial reaction was that it
           | seemed like a somewhat unethical experiment. I assume the
           | client would have agreed or represented themselves, but even
           | there -- legal advice is tricky because it's _advice_ -- it
           | feels unethical to tell a person to rely on something that is
           | very likely going to give them sub-par legal representation.
           | 
           | Sneaking it into a courtroom without the judge's knowledge
           | feels a lot like a PR stunt, and one that might encourage
           | further legal malpractice in the future.
           | 
           | I assume there are other factors at play, I assume many
           | lawyers felt insulted or threatened, but ignoring that, it's
           | not an experiment I personally would have lauded even as a
           | non-lawyer who wishes the legal industry was, well... less of
           | an industry. The goal of automating parts of the legal
           | industry and improving access to representation is a good
           | goal that I agree with. And maybe there are ways where AI can
           | help with that, sure. I'm optimistic, I guess. But this feels
           | to me like a startup company taking advantage of someone
           | who's in legal trouble for a publicity stunt, not like an
           | ethically run experiment with controls and with efforts made
           | to mitigate harm.
           | 
           | Details have been scarce, so maybe there were other safety
           | measures put in place; I could be wrong. But my understanding
           | was that this was planned to be secret representation where
           | the judge didn't know. And I can't think of any faster way to
           | get into trouble with a judge than pulling something like
           | that. Even if the AI was brilliant, it apparently wasn't
           | brilliant enough to counsel its own developers that running
           | experiments on judges is a bad legal strategy.
        
             | tinsmith wrote:
             | > they felt ... threatened
             | 
             | I'm going to sit on that particular hill and see what
             | happens. Even if DoNotPay's AI is not ready to do the job,
             | the idea that AI could one day argue the law by focusing on
             | logic and precedent instead of circumstance and
             | interpretation is exceedingly threatening to a lawyer's
             | career. No offense intended to the lawyers out there, of
              | course. Were I in your shoes, I'd feel a bit fidgety over
             | this, too.
        
               | shrewduser wrote:
               | i feel like lawyers will be able to legally keep AI out
               | of their field for a while yet. they have the tools at
               | their disposal to do so and a huge incentive.
               | 
               | other fields like journalism not so much.
        
               | nonrandomstring wrote:
               | > i feel like lawyers will be able to legally keep AI out
               | of their field for a while yet. they have the tools at
               | their disposal to do so and a huge incentive, other
               | fields like journalism not so much.
               | 
               | That was my initial response too.
               | 
               | Artists, programmers, musicians, teachers are
               | threatened... but shrug and say "that's the future, what
               | can you do". If lawyers feel "threatened" by AI, they get
               | it shot down.
               | 
               | I suddenly have a newfound respect for lawyers :)
               | 
               | Yet if we think about it, we _all_ have exactly the same
               | tools at our disposal - which is just not playing that
               | game. Difference is, while most professions have got used
                | to rolling with whatever "progressive technology" is
               | foisted on us, lawyers have a long tradition of caution
               | and moderating external pressure to "modernise". I'm not
               | sure Microsoft have much influence in the legal field.
        
             | intrasight wrote:
             | From what I've read recently, the legal profession is the
             | one most at risk of adverse financial effects from AI. Not
             | the court appearances nor the specialized work. But the
             | run-of-the-mill boilerplate legal writing that is the bread
              | and butter profit center of most firms. You bet they are
             | threatened and will push back.
             | 
             | Now the question is this. If an AI is doing something
             | illegal like practicing law, how does one sanction an AI?
             | 
             | Edit: found this:
             | 
             | https://jolt.richmond.edu/is-your-artificial-intelligence-
             | gu...
             | 
             | "A person is presumed to be practicing law when engaging in
             | any of the following conduct on behalf of another"
             | 
             | Every state seems to use the word "person" in their rules.
             | 
             | An AI is not a person, and therefore can't be sanctioned
             | for practicing law - my take anyway.
             | 
             | If non-persons can be prosecuted for illegally practicing
             | law, then those non-persons must have the right to get a
             | license. IMHO.
        
               | dragonwriter wrote:
               | > Now the question is this. If an AI is doing something
               | illegal like practicing law, how does one sanction an AI?
               | 
                | It's not, and you don't.
               | 
               | When a _legal person_ (either a natural person or
               | corporation) is doing something illegal like unauthorized
               | practice of law, you sanction that person. The fact that
               | they _use_ an AI as a key tool in their unauthorized law
               | practice is not particularly significant, legally.
        
               | HillRat wrote:
               | > An AI is not a person, and therefore can't be
               | sanctioned for practicing law - my take anyway.
               | 
               | "Personhood" in a legal sense doesn't necessarily mean a
               | natural person. In this case, the company behind it is a
               | person and is practicing law (so no pro se litigant using
               | the company to generate legal arguments). In addition, if
               | you want something entered into court, you need a
               | (natural person) lawyer to do it, who has a binding
               | ethical duty to supervise the work of his or her
               | subordinates. Blindly dumping AI-generated work product
               | into open court is about as clear-cut an ethical
               | violation as you can find.
               | 
               | To your larger point, law firms would _love_ to automate
                | a bunch of paralegal and associate-level work; I've been
               | involved in some earlier efforts to do things like
               | automated deposition analysis, and there's plenty of
               | precedent in the way the legal profession jumped on
               | shepardizing tools to rapidly cite cases. Increased
               | productivity isn't going to be reflected by partners
               | earning any less, after all.
        
               | Calavar wrote:
               | > Now the question is this. If an AI is doing something
               | illegal like practicing law, how does one sanction an AI?
               | 
               | As far as I'm aware, no LLM has reached sentience and
               | started taking on projects of its own volition. So it's
               | easy - you sanction whoever ran the software for an
               | illegal purpose or whoever marketed and sold the software
               | for an illegal purpose.
        
               | intrasight wrote:
               | Lots of legal software is marketed and sold.
        
               | OkayPhysicist wrote:
               | And legal software is very, very careful to avoid
               | constituting legal advice, as opposed to merely legal
               | information.
        
               | squokko wrote:
               | The legal profession is at the least risk of adverse
               | financial effects from anything, because the people who
               | make the laws are largely lawyers, and will shape the law
               | to their advantage.
        
               | danShumway wrote:
               | Automating boilerplate seems like a great use for AI if
               | you can then have someone go over the writing and check
               | that it's accurate.
               | 
               | I'd prefer that the boilerplate actually be reduced
               | instead, but... I don't have any issue with someone using
               | AI to target tasks that are essentially copy-paste
               | operations anyway. I think this was kind of different.
               | 
               | > If an AI is doing something illegal like practicing
               | law, how does one sanction an AI?
               | 
               | IANAL, but AIs don't have legal personhood, so it would
               | be kind of like trying to sanction a hammer. I don't
               | think that the _AI_ was being threatened with legal
               | action over this stunt, DoNotPay was being threatened.
               | 
               | In an instance where an AI just exists and is Open Source
               | and there is no party at fault beyond the person who
               | decides to download and use it, then as long as that
               | person isn't violating court procedure there's probably
               | no one to sanction? It's likely a bad move, but :shrug:.
               | 
               | But this comes into play with stuff like self-driving as
               | well. The law doesn't think of AI as something that's
               | special. If your AI drives you into the side of the wall,
               | it's the same situation as if your back-up camera didn't
               | beep and you backed into another car. Either the
               | manufacturer is at fault because the tool failed, or
               | you're at fault and you didn't have a reasonable
               | expectation that the tool wouldn't fail or you used it
               | improperly. Or maybe nobody's at fault because everyone
               | (both you and the manufacturer) acted reasonably. In all
               | of those cases, the AI doesn't have any more legal rights
                | or masking of liability than your brake pads do, it's not
               | treated as a unique entity -- and using an AI doesn't
               | change a manufacturer's liability around advertising.
               | 
               | That gets slightly more complicated with copyright law
               | surrounding AIs, but even there, it's not that AIs are
               | special entities that have their own legal status that
               | can't own copyright, it's that (currently, we'll see if
               | that precedent holds in the future) US courts rule that
               | using an AI is not a sufficiently creative act to
               | generate copyright protections.
        
               | intrasight wrote:
               | This is different from self-driving or software dev apps.
               | 
               | Law is different because the bar has a legally enforced
               | monopoly on doing legal work.
               | 
               | DoNotPay was being threatened. But they weren't
               | practicing law - they were just providing legal tools.
               | 
                | My point is that we're in uncharted legal territory.
               | Perhaps ask the AI what it thinks ;)
        
               | OkayPhysicist wrote:
               | Actually, by telling a client what specific arguments to
               | make in court, they were giving big-L Legal Advice, and
               | thus literally practicing law.
        
             | BolexNOLA wrote:
              | Yeah I feel like you're right on the money re: the ethics
              | of using someone who _is_ in legal trouble and will have
              | to live with the results. It's not as sexy but they
             | should just build a fake case (or just use an already
             | settled one if possible) and play out the scenario. No
             | reason it wouldn't be just as effective as a "real" case.
        
               | danShumway wrote:
               | I'd have no objections at all to them setting up a fake
               | test case with a real judge or real prosecutors and doing
               | controlled experiments where there's no actual legal risk
               | and where everyone knows it's not a real court case.
               | You're right that it wouldn't be as attention-grabbing,
               | but I suspect it would be a lot more useful for actually
               | determining the AI's capabilities, with basically zero of
               | the ethical downsides. I'd be fully in support of an
               | experiment like that.
               | 
               | Run it multiple times with multiple defendants, set up a
               | control group that's receiving remote advice from actual
               | lawyers, mask which group is which to the judges, then
               | ask the judge(s) at the end to rank the cases and see
               | which defendants did best.
               | 
               | That would be a lot more work, but it would also be much
               | higher quality data than what they were trying to do.
        
               | refactor_master wrote:
               | > Run it multiple times with multiple defendants, set up
               | a control group
               | 
               | And also
               | 
               | > That would be a lot more work, but it would also be
               | much higher quality data
               | 
               | I don't know much about the field of law, but anecdotally
               | it doesn't strike me as particularly data driven. So I
               | think, even before introducing any kind of AI, the above
               | would be met with a healthy dose of gatekeeping.
               | 
               | Like the whole sport of referencing prior rulings, based
               | on opinions at a point in time doesn't seem much
               | different than anecdotes to me.
               | 
               | But I'd love to be proven wrong though.
        
               | BolexNOLA wrote:
               | And in some ways it's less work! The risks of using a
               | real court case are massive if you ask me. We are a
               | wildly litigious country. No amount of waivers will stop
               | an angry American.
        
               | john_the_writer wrote:
               | It's about volume. A fake case would be expensive to run
               | and running dozens of them a day would be hard.
               | 
               | That said. The consequence of most traffic tickets is
               | increased insurance and a fine. Yes these do have an
               | impact on the accused, but they are the least impactful
               | legal cases, so it would make sense to focus on them as
               | test cases.
        
               | crabmusket wrote:
               | Is this not what moot court is? Seems like a great place
               | to test and refine this kind of technology. The same
               | place lawyers in training are tested and refined.
        
           | g_p wrote:
           | Browder (the founder) also appeared to acknowledge that it
           | was not fit for purpose [0].
           | 
           | If something that's providing input to a formal legal process
           | (which, let's not forget, means false or inaccurate
           | statements have real and potentially prejudicial
           | repercussions), "makes facts up and exaggerates", then there
           | seems to be no reason they should be talking about taking
           | this anywhere near a courthouse.
           | 
           | This feels a lot like "move fast and break things" being
           | applied - where the people silly enough to use this tool and
           | say whatever it came up with would end up with more serious
           | legal issues. It seems like that only stopped when the
           | founder himself was the one facing the serious legal issues -
           | 'good enough for thee, but not for me'...
           | 
           | I think what many are overlooking is that bad inputs to the
           | legal system can jeopardise someone's position in future
           | irretrievably, with little or no recourse (due to his class
           | action/arbitration waiver). Once someone starts down the road
           | of legal action, there's real consequences if you get it
           | wrong - not only through exposure, but also through
           | prejudicing your own position and making it impossible to
           | take a different route, having previously argued something.
           | 
           | [0]
           | https://twitter.com/SemaforComms/status/1618306993902743555
        
         | chrsig wrote:
         | > LLMs hallucinate, and there is absolutely no space for
         | hallucination in a court of law.
         | 
         | Well, _someone_ doesn't have enough schizophrenia in their
         | life.
        
         | PostOnce wrote:
         | The AI is also _deceptively_ "right". For example, it will cite
         | precedent that has since been superseded.
         | 
         | A non-lawyer representing themselves in a criminal case would
         | overlook that, make a bad/wrong/misinformed argument, and go to
         | jail.
         | 
         | In other fields, it'll lie to you about the thickness of steel
         | pipe required to transport water at a certain pressure, it'll
         | refer to programming libraries that don't exist, and it'll
         | claim something impossible in one breath and happily explain it
         | as fact in the next.
        
         | mountainb wrote:
         | What's interesting is that sometimes it does a great job at
         | something like telling you the holdings of a case, but then
         | other times it gives you a completely incorrect response. If
         | you ask it for things like "the X factor test from Johnson v.
         | Smith" sometimes it will dutifully report the correct test in
         | bullets, but other times will say the completely wrong thing.
         | 
         | The issue I think is that it's pulling from too many sources.
         | There are plenty of sources that are pretty machine readable
         | that will give it good answers. There's a lot of training that
         | can be eked out from the legal databases that already exist
         | that could make it a lot better. If it takes in too much
         | information from too many sources, it tends to get garbled.
         | 
         | There are also a lot of areas where it will confuse concepts
         | from different areas of law, like mixing up criminal battery
         | with civil battery, but that's not the worst of the problems.
        
           | lolinder wrote:
           | > The issue I think is that it's pulling from too many
           | sources. There are plenty of sources that are pretty machine
           | readable that will give it good answers. There's a lot of
           | training that can be eked out from the legal databases that
           | already exist that could make it a lot better. If it takes in
           | too much information from too many sources, it tends to get
           | garbled.
           | 
           | No, this is a common misunderstanding about the way these
           | things work. A LLM is not really pulling from any sources
           | specifically. It has no concept of a source. It has a bunch
           | of weights that were trained to predict the next likely word,
           | and those weights were tuned by feeding in a large amount of
           | text from the internet.
           | 
           | Improving the quality of the sources used to train the
           | weights would likely help, but would not solve the
           | fundamental problem that this isn't actually a lossless
           | knowledge compression algorithm. It's a statistical machine
           | designed to guess the next word. That makes it fundamentally
           | non-deterministic and unsuitable for any task where factual
           | correctness matters (and there's no knowledgeable human in
           | the loop to issue corrections).
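
             A minimal sketch of the "guess the next word" behavior described
             above. The vocabulary and the probabilities here are invented
             purely for illustration; a real model derives its distribution
             from billions of trained weights and, as noted, has no concept
             of a source to look anything up in:

             ```python
             import random

             # Toy stand-in for an LLM's next-token step. The tokens and the
             # probabilities are made up for this example; a real model has no
             # table like this, only weights tuned on a large text corpus.
             NEXT_TOKEN_PROBS = {
                 "see Lemon v.": {"Kurtzman": 0.6, "Smith": 0.3, "Johnson": 0.1},
             }

             def sample_next_token(prompt, rng):
                 """Draw one continuation at random, weighted by probability."""
                 dist = NEXT_TOKEN_PROBS[prompt]
                 tokens = list(dist)
                 weights = [dist[t] for t in tokens]
                 return rng.choices(tokens, weights=weights, k=1)[0]

             rng = random.Random()  # unseeded, so repeated runs can differ
             # Sampling many times usually yields several different
             # continuations: the machine guesses a plausible next word,
             # it does not retrieve a fact.
             samples = {sample_next_token("see Lemon v.", rng) for _ in range(200)}
             print(samples)
             ```

             The point of the sketch is only that the output is drawn from a
             distribution, which is why the same prompt can produce a correct
             answer one time and a confident-sounding wrong one the next.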
        
             | jamesdwilson wrote:
             | If it is never pulling from a source, then why is it able
             | to provide citations?
             | 
             | check the chat bot on you.com:
             | 
             | https://you.com/search?q=list%20top%205%20best%20selling%20
             | c...
        
               | Aune wrote:
               | Asking "Can you cite some legal precedence for lemon law
               | cases?" gives an answer containing
               | 
               | "In California, for example, the California Supreme Court
               | in the case of Lemon v. Kurtzman (1941) held that a
               | vehicle which did not meet the manufacturer's express
               | warranty was a "lemon" and the manufacturer was liable
               | for damages."
               | 
                | I don't think that case exists; there is a First
                | Amendment case, Lemon v. Kurtzman, 403 U.S. 602 (1971),
                | though.
               | 
               | I can't find any reference to Kurtzman or 1941 in any of
               | the references. I think the answer is that the AI
               | generating the text, and the code supplying the
               | references are distinct and do not interact.
        
               | lolinder wrote:
               | You.com is hugged to death right now, but from what I can
               | see it's a different kind of chatbot. It looks closer to
               | Google's featured snippets than it is to ChatGPT.
               | 
               | That kind of chatbot has different limitations that would
               | make it unsuitable to be an unsupervised legal advice
               | generator.
        
               | dan_mctree wrote:
               | ChatGPT can provide correct citations because somewhere
               | deep in its weights it does lossily encode real texts and
               | citations to real texts. That makes real citations in
               | some cases be its most confident guess for what is
                | supposed to come next in the sentence. But when there
                | isn't a real text it is confident about citing, yet it
                | still feels like a citation should be next
               | in the output, it will happily invent realistic looking
               | citations to texts that have never existed and it has
               | never seen in any sources. On the outside, as readers,
               | it's hard to tell when this occurs without getting an
               | outside confirmation. I'm assuming though that to some
               | degree it is itself aware that a linked citation doesn't
                | refer to anything.
        
               | dragonwriter wrote:
               | > If it is never pulling from a source, then why is it
               | able to provide citations?
               | 
               | If you have the training set, and models that summarize
               | text and/or assess similarity, and some basic search
               | engine style tools to reduce or prioritize the problem
               | space, it seems intuitively possible to synthesize
               | probably-credible citations from a draft response without
               | the response being drawn from citations the way a human
               | author would.
               | 
               | Kind of a variant of how plagiarism detection works.
        
               | ascagnel_ wrote:
               | The example you give isn't necessarily a valid one.
               | You're asking for a specific piece of knowable,
               | measurable data -- one that has a single right answer and
               | many wrong answers. Legal questions may have conflicting
               | answers, they may have answers that are correct in one
                | venue but not in another, etc. I haven't yet seen any
               | examples of an AI drawing the distinctions necessary for
               | those situations.
        
               | jamesdwilson wrote:
               | thanks very much for this insightful response. i haven't
               | heard this perspective before but it makes sense.
        
               | retrac wrote:
               | One useful way to think of language models is that they
                | are statistical completion engines. The model attempts
                | to create a completion to the prompt, and then evaluates the
               | likelihood, in a statistical sense, that the completion
               | would follow the prompt, based on the patterns in the
               | training data.
               | 
               | A citation in legalese is very common. A citation that is
               | similar or identical to actual citations, in similar
               | contexts, is therefore an excellent candidate for the
               | completion. A fake citation that looks like a real
               | citation is also a rather good candidate, and will
               | sometimes squeak past the "is this real or fake?" metric
               | used to evaluate generated potential responses.
               | 
               | This may seem like "pulling from a source" but there is
               | no token, semantic information, or even any information
               | in the model about where and when the citation was
               | encountered. There is no identifiable structure or object
               | (so far as anyone can tell anyway) in the model that is a
               | token related to and containing the citation. It just
               | learns to create fake citations so convincingly, that
               | most of the time they're actually real citations.
        
               | mountainb wrote:
               | This explains some of the particular errors that I've
               | seen when poking and prodding it on complex legal
               | questions and in trying to get it to brief cases.
        
             | skissane wrote:
             | What if we prompt the LLM to generate a response with
             | citations, and then we have program which looks up the
             | citations in a citation database to validate their
             | correctness? Responses with hallucinated citations are
             | thrown away, and the LLM is asked to try again. Then, we
             | could retrieve the text of the cited article, and get
             | another LLM to opine on whether the article text supports
             | the point it is being cited in favour of. I think a lot of
             | these problems with LLMs could be worked around with a few
             | more moving parts.
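
               The retry loop described above could look roughly like this.
               `generate_answer` and `citation_exists` are hypothetical
               stand-ins for the LLM call and the citation-database lookup
               (neither is a real API), and the citation pattern is
               simplified to one U.S. Supreme Court reporter style:

               ```python
               import re
               from typing import Callable, Optional

               # Simplified pattern for "Lemon v. Kurtzman, 403 U.S. 602 (1971)".
               CITATION_RE = re.compile(
                   r"[A-Z][a-z]+ v\. [A-Z][a-z]+, \d+ U\.S\. \d+ \(\d{4}\)"
               )

               def answer_with_verified_citations(
                   generate_answer: Callable[[str], str],   # stand-in for the LLM
                   citation_exists: Callable[[str], bool],  # stand-in for the DB
                   question: str,
                   max_attempts: int = 3,
               ) -> Optional[str]:
                   """Regenerate until every citation verifies, or give up."""
                   for _ in range(max_attempts):
                       answer = generate_answer(question)
                       if all(citation_exists(c) for c in CITATION_RE.findall(answer)):
                           return answer
                   return None  # surface failure rather than show bad citations

               # Toy demo: a "model" that hallucinates a year on its first try.
               attempts = iter([
                   "See Lemon v. Kurtzman, 403 U.S. 602 (1941).",  # wrong year
                   "See Lemon v. Kurtzman, 403 U.S. 602 (1971).",  # real case
               ])
               known = {"Lemon v. Kurtzman, 403 U.S. 602 (1971)"}
               result = answer_with_verified_citations(
                   lambda q: next(attempts), lambda c: c in known,
                   "lemon law precedent?"
               )
               print(result)
               ```

               The second check proposed above (a separate LLM pass judging
               whether the cited text actually supports the point) would slot
               in as an extra condition inside the same loop.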
        
               | theLiminator wrote:
                | Definitely, no one is arguing that an AI lawyer will
                | arrive in the near future, but I can totally see it
                | being good enough for the vast majority of small-scale
                | lawsuits
               | within 10-20 years.
        
             | mountainb wrote:
             | If that's true, then it'll never get to the point that it
             | would need to, and its reliability would always be too low
             | to use.
        
               | lolinder wrote:
               | Exactly. The paradigm is wrong, and in order to get past
               | the problems we need a new paradigm, not incremental
               | improvement on LLMs.
        
         | dclowd9901 wrote:
          | This was never meant in good faith. The company did it for
          | the PR and got the PR.
        
         | intrasight wrote:
         | >absolute precision is required
         | 
         | LOL - you gotta be kidding. In software, we strive for that -
         | by running tests. In the law, there are no tests.
         | 
         | Not saying that absolute precision isn't required. I know lots
          | of cases where an extra comma, a wrong date, or a signature from
          | the wrong person has cost someone tens of millions of dollars.
         | I would argue that AI-based tools could prevent such HUMAN
         | mistakes.
        
         | bostonsre wrote:
         | In the court itself there would definitely be no way to trust
         | them right now, but I could see AI being a useful research tool
         | for cases. It could find patterns and suggest cases for someone
         | that is qualified to look further into. No idea how hard it is
         | for lawyers to find relevant cases now, but seems like it could
         | be a tough problem.
        
           | lolinder wrote:
           | Yes, absolutely. As a senior software engineer, Copilot has
           | been invaluable in helping me to do things faster. But having
           | an expert human in the loop is key: someone has to _know_
           | when the AI did it wrong.
           | 
           | What's so bad about this experiment isn't that they tried to
           | use AI in law, it's that they tried to use AI _without_ a
           | knowledgeable human in the loop.
        
         | jonathanwallace wrote:
         | > there is absolutely no space for hallucination in a court of
         | law
         | 
         | I wish you were factually correct here.
         | 
         | We've seen time and time again where courts are mockeries of
         | the ideal because of the people in them and the faults they
         | bring with them.
         | 
         | E.g. see
         | https://www.supremecourt.gov/opinions/21pdf/21-418_i425.pdf and
         | the documented proof _in the dissent_ contradicting the claims
         | in the ruling.
        
           | lolinder wrote:
           | That there _exist_ mockeries of justice is not proof that we
           | should _allow_ them.
        
             | jonathanwallace wrote:
             | 100% Agreed.
             | 
             | Rhetorical question: Now what's the enforcement mechanism?
             | 
             | (Also, what's with the downvotes?!?!)
        
           | Miraste wrote:
           | That case, to me, is indicative of a larger problem - it's 75
           | pages of arcane justifications, and yet I already knew how
           | all of the justices had voted just from reading the premise,
           | because like every Supreme Court case in a politicized area
           | it was decided by personal conviction and the rest is post-
           | hoc rationalization.
           | 
           | There is no hallucination on the part of the humans involved,
           | only intellectual dishonesty.
        
         | kjkjadksj wrote:
         | At the same time, all these cases are on the internet
         | somewhere. It wouldn't be too tricky to make a lawyer gpt that
         | is heavily trained on existing legal documents, and is only
         | allowed to quote verbatim from real sources.
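          | A minimal sketch of that "only quote verbatim" constraint,
          | with an invented toy corpus and made-up case names:

```python
# Hypothetical sketch: accept a model's quotation only if it appears
# verbatim in a corpus of real source documents. The corpus contents
# and case names below are invented for illustration.
from typing import Optional

CORPUS = {
    "smith_v_jones_1982": (
        "The court held that a parking ordinance must be posted "
        "conspicuously to be enforceable."
    ),
    "doe_v_city_1990": "Absent proper signage, the citation was vacated.",
}

def grounded_quote(candidate: str) -> Optional[str]:
    """Return the id of a document containing the quote verbatim,
    or None if no source contains it (a likely hallucination)."""
    for doc_id, text in CORPUS.items():
        if candidate in text:
            return doc_id
    return None

print(grounded_quote("the citation was vacated"))  # doe_v_city_1990
print(grounded_quote("Johnson v. Smith (1978)"))   # None
```

          | A real system would do retrieval over indexed case law rather
          | than substring matching, but the rejection step is the point:
          | anything the model can't ground in a source gets dropped.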
        
         | dukeofdoom wrote:
          | That's hilarious, watch some of the trials on courtTV on
          | youtube. The trials are as culturally biased as you can
          | get. And these are the ones we get to see. Judges are not
          | some logical Spock, free of influence, politics, and current
          | group think, but people who think about their careers, public
          | opinion and know who pays their paycheck. And these are the
          | competent ones. I remember judge Judy proclaiming "If it
          | doesn't make sense, it's not true!!!", while screaming at some
          | guy. This is pretty much the level of logic you can expect from
          | a judge.
        
           | burkaman wrote:
           | Court TV is not real court and is not anything like real
           | court.
        
             | dukeofdoom wrote:
              | The TV one wasn't, but the youtube one actually is:
              | https://www.youtube.com/c/courttv
             | 
             | And judges as a group are some of the most deferential to
             | authority people you can get.
        
         | yubiox wrote:
          | LLM means Master of Laws degree, kind of an advanced JD used in
         | some specializations like tax. What do you mean here?
        
           | pulisse wrote:
           | In this context it means Large Language Model.
        
           | appletrotter wrote:
           | Large Language Model
        
         | gnicholas wrote:
         | Exactly. I asked it for books on Hong Kong history and it spit
         | out five complete fabrications. The titles were plausible and
         | authors were real people, but none of them had written the
         | books listed.
        
           | LeifCarrotson wrote:
            | Can you follow up and ask it "Is [title] by [author] a
            | published book?"
        
             | lolinder wrote:
             | I just tried, and it did issue a correction and gave me
             | five new books, several of which were also fictitious.
             | 
              | But the real question is, could you follow up with a
              | question as it's dictating to you live in a courtroom?
        
         | weatherlite wrote:
         | They will pretty soon be able to fact check everything they say
          | when they gain real-time internet connectivity. But for now,
          | yeah, you're right. A year or two from now this won't be true
          | anymore.
        
           | Phlarp wrote:
           | AI hallucinations are going to be the new database query
           | injection. Saying that real time internet connected fact
           | checking will solve that is every bit as naive as thinking
           | the invention of higher level database abstractions like an
           | ORM will solve trivially injectable code.
           | 
           | We can't even make live fact checking work with humans at the
           | wheel. Legacy code bases are so prolific and terrible we're
            | staring down the barrel of a second major industry crisis for
            | parsing dates past 2037, but sure, LLMs are totally going to
            | get implemented securely and updated regularly, unlike all the
           | other software in the world.
        
           | sigmoid10 wrote:
           | I'd also argue that "hallucination" is, at least in some
           | form, pretty commonplace in courtrooms. Neither lawyers' nor
           | judges' memories are foolproof and eyewitness studies show
           | that humans don't even realise how much stuff their brain
           | makes up on the spot to fill blanks. If nothing else, I
           | expect AI to raise awareness for human flaws in the current
           | system.
        
             | lolinder wrote:
             | That the legal system has flaws isn't a good argument for
             | allowing those flaws to become automated. If we're going to
              | automate a task, we should expect it to be better, not
              | worse or just as bad (and at this stage it would definitely
              | be worse).
        
             | dinkumthinkum wrote:
             | No shot. It is not commonplace there that lawyers and
             | judges are making ridiculous arguments or citing fictional
             | case law.
        
           | aardvarkr wrote:
           | Being connected to the internet may make their information
           | more up to date but they won't stop being confidently
           | incorrect. These language models are amazing at token
           | prediction but it's a mistake to believe that they're
           | actually intelligent.
           | 
           | Give it an inane prompt to better see the difference, like
           | "write me an essay about camels and their value in
           | cybersecurity" which gives us the answer
           | 
           | "The camel is a valuable animal in the cybersecurity field
           | due to its ability to store large amounts of data in its
           | hump. Camels are able to cross vast distances with little
           | water, which makes them ideal for carrying large amounts of
           | data across networks. They are also very sturdy animals, able
           | to withstand harsh conditions and even attacks from cyber
           | criminals. In addition, their long eyelashes protect their
           | eyes from sand and dust, making them perfect for working in
           | dusty environments such as data centers."
           | 
           | Having internet access wouldn't fix this.
        
             | weatherlite wrote:
              | Well, it did what you asked it to do: you ordered it to
              | write an essay about camels and their value in
              | cybersecurity. It doesn't understand whether you're joking
              | or what the purpose of the whole thing is.
             | 
             | I asked it this:
             | 
             | Are camels important to cyber security?
             | 
             | Answer:
             | 
             | "No, camels are not typically considered to be important
             | for cyber security. Camels are domesticated mammals that
             | are well-adapted to desert environments and are often used
             | as a means of transportation or for their milk, meat, and
             | hides. Cyber security, on the other hand, is the practice
             | of protecting computer systems, networks, and data from
             | unauthorized access, use, disclosure, disruption,
             | modification, or destruction. The two are not related."
             | 
             | Sounds pretty intelligent to me.
        
               | aardvarkr wrote:
               | A human would say that's ridiculous and tell you why.
               | Google would give you a link to Apache Camel. ChatGPT
                | tells you about how great a camel's humps are for data
               | storage.
               | 
               | You tell me which system is intelligent.
        
               | dinkumthinkum wrote:
               | That's preposterous. An intelligent agent, human,
               | mineral, or otherwise, would respond that this is a
               | ridiculous idea and ideally explain the reasons that such
                | is the case. Imagine you are a student and you asked this
                | amazing AI some question that was similarly, if mildly
                | less, ridiculous, and in turn imagine the student didn't
                | already know the answer. Would you think this kind of
                | response would be an example of an intelligent AI?
               | 
               | If it cannot deal with such things without being prompted
               | in such a way that the prompter knows the answer already,
               | how could it deal with complex legal situations with
               | actually intelligent adversaries?
        
           | lolinder wrote:
           | This is overly optimistic. For one, fact checking is much
           | harder than you think it is. Aside from that, there are also
           | many additional problems with AI legal representation, such
           | as lack of body language cues, inability to formulate a
            | coherent legal strategy, and bad logical leaps. We're nowhere
            | near solving those problems.
        
           | codingdave wrote:
           | How would they be able to parse facts from fiction? Just
           | because something is online does not mean it is true.
        
             | weatherlite wrote:
              | It's not like humans are great at parsing fact from
              | fiction... how do humans do it?
        
           | stetrain wrote:
           | Yep, it can fact check against authoritative sounding
           | internet articles that were also written by AI.
        
         | humans2 wrote:
         | [dead]
        
         | gfodor wrote:
         | If this is the case, the lawyers should have nothing to fear,
         | and the plaintiff nothing to lose but a parking ticket. I say
         | we stop arguing and run the experiment.
        
           | dclowd9901 wrote:
            | I mean, why not run it as an experiment? Fake parking ticket,
           | fake defendant, pay a judge to do the fake presiding. If the
           | actual goal was to test it, it would be trivially easy to do.
           | The goal here wasn't to test it, it was to get publicity.
        
           | freejazz wrote:
           | Why is there this narrative that it's a response being done
           | out of fear? That seems to be a huge misunderstanding on your
           | part.
        
           | drakythe wrote:
           | As noted by several lawyers when some of the details of this
           | experiment were revealed: The AI already committed a cardinal
           | sin of traffic court in that it subpoenaed the ticketing
           | officer.
           | 
           | Rule 1 of traffic court: If the state's witness (the
           | ticketing officer) doesn't show, defendant gets off. You do
           | not subpoena that witness, thereby ensuring they show up.
           | 
           | If the AI or its handlers cannot be trusted to pregame the
           | court appearance even remotely well then no way in hell
           | should it be trusted with the actual trial.
           | 
           | You want to run this experiment? Great, lets setup a mock
           | court with a real judge and observing lawyers and run through
           | it. But don't waste a real court's time or some poor
           | bastard's money by trying it in the real world first.
           | 
           | A reminder that "Move fast and break things" should never
           | apply where user's life or liberty is at stake.
        
             | gfodor wrote:
              | Agree with your last point, but a traffic ticket beta test
              | does not fail that test.
        
               | drakythe wrote:
               | While I agree in absolute terms, in legal terms it is
               | problematic because it sets precedent, which is what the
               | law often runs off of. Better to not breach that line
               | until we're sure it can perform in all circumstances, or
               | rules have been established which clearly delineate where
               | AI assistance/lawyering is allowed and where it isn't.
        
             | JumpCrisscross wrote:
             | > _noted by several lawyers when some of the details of
             | this experiment were revealed: The AI already committed a
             | cardinal sin of traffic court in that it subpoenaed the
             | ticketing officer_
             | 
             | Would you have a source?
        
               | drakythe wrote:
               | https://twitter.com/questauthority/status/161754192110223
               | 360...
               | 
                | Mike Dunford is a practicing attorney. The embedded
                | tweet is from a non-lawyer who screenshotted Joshua
                | Browder (DoNotPay CEO) saying the subpoena had been
                | sent. He has since deleted those tweets as DNP backs
                | away from this plan.
        
           | snowwrestler wrote:
           | We can certainly run the experiment, just like we can let a
           | kid touch a hot pan on the stove.
           | 
           | Like the kid, the experiment is not to add knowledge to the
           | world. Every adult knows touching a hot pan results in a
           | burn. Just like everyone who understands how current LLMs
           | work knows that it will fail at being a lawyer.
           | 
           | Instead the point of such an experiment is to train the
           | experimenter. The kid learns not to touch pans on the stove.
           | 
           | In this case it's not fair to metaphorically burn desperate
           | legal defendants so that the leaders and investors in an LLM
           | lawyer company learn their lesson. It's the same reason we
           | don't let companies "experiment" with using random substances
           | to treat cancer.
        
           | lolinder wrote:
           | Non-lawyers aren't banned from giving legal advice because
           | lawyers are trying to protect their jobs, they're banned from
           | giving legal advice because they're likely to be bad at it,
           | and the people who take their advice are likely to be hurt.
           | 
           | Yes, in this case, it would just be a parking ticket, but the
           | legal system runs on precedent and it's safer to hold a
           | strict line than to create a fuzzy "well, it depends on how
           | much is at stake" line. If we know that ChatGPT is not
           | equipped to give legal advice in the general case, there's no
           | reason to allow a company to sell it as a stand-in for a
           | lawyer.
           | 
           | (I would feel differently about a defendant deciding to use
           | the free ChatGPT in this way, because they would be
           | deliberately ignoring the warnings ChatGPT gives. It's the
           | fact that someone decided to make _selling_ AI legal advice
           | into a business model that makes it troubling.)
        
             | fallingknife wrote:
             | They are not scared that it will fail. They are scared that
             | it will succeed. And there's a great reason to allow a
             | company to sell a stand-in for a lawyer. Cost. This isn't
             | targeted at people who can afford lawyers, it's targeted at
             | people who can't, for now at least.
        
               | notahacker wrote:
               | Considering it's so bad it came to people's attention
               | when it _sent a subpoena to make sure someone came to
                | testify against its client when he might have had a
                | default judgement in his favour if they hadn't_, I think
               | the people who can't afford the lawyers have a lot more
               | to be scared of than the lawyers...
               | 
               | And the reason lawyers are expensive is because _bad_
               | legal advice usually costs far more in the long run.
        
               | NoboruWataya wrote:
               | It's naive to think that a company would develop an AI
               | capable of beating a lawyer in court and then sell it
               | cheaply to poor people to beat traffic tickets. If anyone
               | ever manages to develop an AI that is actually capable of
               | replacing a lawyer, it will be priced way, way out of
               | reach of those people. It will be sold to giant
               | corporations so that they can spend $500k on licence fees
               | rather than $1 million on legal fees. (And unless those
               | corporations can get indemnities from the software vendor
               | backed by personal guarantees they'd still be getting a
               | raw deal.)
               | 
               | These people are being sold snake oil. Cheap snake oil,
               | maybe, but snake oil nonetheless.
        
               | freejazz wrote:
               | Lawyers aren't scared at all. It's traffic court, you are
               | really overstating things. If it was a serious case, it'd
               | be even more ridiculous to put more on the line by being
               | represented by a computer algorithm that isn't subject to
               | any of the licensing standards of an atty, none of the
               | repercussions, and being run by a business that is
               | disclaiming all liability for their conduct.
               | 
               | You know what an attorney can't do? Disclaim malpractice
               | liability!
               | 
               | It'd be wondrous if the esteemed minds of hackernews
               | could put their brain cycles towards actually applying
               | common sense and other things rather than jerking off to
               | edgy narratives about disruption while completely
               | disregarding the relevant facts to focus on what they
                | find politically juicy ("lawyers are scared it will
                | succeed"). It's a tautological narrative you are weaving
                | for yourself that completely skirts past all the
                | principles underlying the legal profession and its
                | development over hundreds of years.
        
               | JumpCrisscross wrote:
               | > _are not scared that it will fail. They are scared that
               | it will succeed_
               | 
               | Lawyers make _heavy_ use of automated document sifting in
               | _e.g._ e-discovery.
               | 
               | Junior lawyers are expensive. Tech that makes them
               | superfluous is a boon to partners. When we toss the
               | village drunk from the bar, it isn't because we're scared
               | they'll drink all the booze.
        
               | rhino369 wrote:
               | >They are not scared that it will fail. They are scared
               | that it will succeed.
               | 
               | Not really. There are more lawyers than legal jobs. A lot
               | of lawyers are toiling for well under 100k a year. People
               | pay 1500 dollars an hour for some lawyers and 150 an hour
               | for others due to perceived (and actual) quality
               | differences. Adding a bunch more non-lawyers isn't going
               | to impact the demand for the 1500 dollars an hour
               | lawyers.
               | 
               | Legal work is expensive because ANY sort of bespoke
               | professional work is expensive. Imagine if software
               | developers had to customize their work for each customer.
        
             | gfodor wrote:
             | Ultimately though the argument you have set up here seems
             | to make it all but impossible for AI to displace humans in
             | the legal profession. If the argument is "precedent rules"
             | then "only humans can be lawyers" is precedent.
             | 
             | I'm not sure if this particular case with this particular
             | technology made sense - but I do think we need to encourage
             | AI penetration of the legal profession, in a way that has
             | minimal downside risk. (For defendants and plaintiffs, not
             | lawyers.) It would be hugely beneficial for society if
             | access to good legal advice was made extremely cheap.
        
               | lolinder wrote:
               | No, if in a hypothetical future we have technology that
               | is capable of reliably performing the role, I don't have
               | a problem with it. This tech is explicitly founded on
               | LLMs, which have major inherent weaknesses that make them
               | unsuitable.
        
             | enraged_camel wrote:
             | >> Non-lawyers aren't banned from giving legal advice
             | because lawyers are trying to protect their jobs, they're
             | banned from giving legal advice because they're likely to
             | be bad at it, and the people who take their advice are
             | likely to be hurt.
             | 
             | But why would the opposing side's lawyers care about this?
             | They presumably want _their_ client to win the lawsuit.
        
               | gameman144 wrote:
               | Incompetent representation is grounds for a mistrial, or
               | successful appeal.
               | 
               | The prosecution wants to win, but they'd prefer to only
               | have to win once.
               | 
                | If you have to go _back_ to trial, you've already shown
               | your hand, and the defense (who is now competent) can
               | adapt to it.
        
               | freejazz wrote:
               | When did the opposing side's lawyers say anything about
               | this? Are you confused? Law is a regulated profession.
               | The lawyers pointing out that this is illegal aren't on
               | the other side of the case...
        
               | matthewheath wrote:
               | I only have immediate knowledge of UK law, but lawyers
               | will generally have a duty to the court to act with
               | independence in the interests of justice. This tends to
               | mean that in situations where one side are self-
               | represented or using the services of ChatGPT, etc. the
               | opposing side is under a duty not to take unfair
               | advantage of the fact that one side is not legally
               | trained.
               | 
                | They don't have to _help_ them, but they can't act
               | abusively by, for example, exploiting lack of procedural
               | knowledge.
               | 
               | If they deliberately took advantage of one side using
               | ChatGPT and getting it wrong because the legal foundation
               | of knowledge isn't there for that person, that _could_ be
               | a breach of their duty to the court and result in
               | professional censure or other regulatory consequences.
        
               | greiskul wrote:
               | Well, it is supposed to be a _Justice_ system, and not a
               | game. While it is very easy to forget that, and many of
                | the participants in it clearly don't behave as such, the
                | outcome should be just.
        
         | AlexTWithBeard wrote:
         | I don't see the problem here.
         | 
         | > Defendant (as dictated by AI): The Supreme Court ruled in
         | Johnson v. Smith in 1978...
         | 
         | > Judge: There was no case Johnson v. Smith in 1978. Case
         | closed, here's your fine.
         | 
         | Next time please be more careful picking the lawyer.
        
           | torstenvl wrote:
           | Well, the problem is that the defendant has a right to
           | competent representation, and ineffective assistance of
           | counsel fails to fulfill that right.
           | 
           | (Your hypothetical includes a fine, so it isn't clear whether
           | the offense in your hypothetical is one with, shall we say,
           | enhanced sixth amendment protections under Gideon and
           | progeny, or even one involving a criminal offense rather than
           | a simple civil infraction, but...) in many cases lack of a
           | competent attorney is considered structural error, meaning
           | automatic reversal.
           | 
           | In practice, that means that judges (who are trying to
           | prevent their decisions from being overturned) will gently
           | correct defense counsel and guide them toward competence,
           | something that frustrated me when I was a prosecutor but
           | which the system relies upon.
        
             | gptgpp wrote:
             | Seems like the solution is clear then. If the judge gently
             | corrects defense counsel and guides them towards
             | competence, they can just do the same with AI. Then the
             | company can use that data to improve it! Eventually it will
             | be perfect with all the free labor from those judges.
             | 
             | >Judge: that case does not exist. Ask it about this case
             | instead
             | 
             | >AI: I apologize for the mistake, that case is more
             | applicable. blah blah blah. Hallucinates an incorrect
             | conclusion and cites another hallucinated case to support
             | it.
             | 
              | >Judge: The actual conclusion to the case was this, and
              | that other case also does not exist.
             | 
             | Isn't that the same thing? Seems fine to me, I know the
             | legal system is already pretty overwhelmed but eventually
             | it might get so good everyone could be adequately
             | represented by a public defender.
             | 
             | Speaking of, I remember reading most poor people can only
             | see the free lawyer they've been assigned for a couple
             | minutes and they barely review the case? I don't understand
             | how that is okay, as long as technically they're competent
             | even if the lack of time makes them pretty ineffective...
        
           | barbazoo wrote:
           | Do judges know about all prior cases or do they check when
           | they hear one referenced? It feels like this could easily
           | slip through, no?
        
             | torstenvl wrote:
             | "Counsel, I'm unfamiliar with the case you've cited. Have
             | you brought a copy for the court? No? How about a bench
             | brief? Very well. I am going to excuse the panel for lunch
             | and place the court in recess. Members of the jury, please
             | return at 1:00. Counsel, please arrive with bench briefs
             | including printouts of major cases at 12:30. Court stands
             | in recess." _bang_
             | 
             | "All rise!"
        
             | dinkumthinkum wrote:
             | That's not how the legal system works. You aren't slipping
             | anything through. Either the judge knows the case, they
             | don't know all the cases, or the judge will research or
             | clerks will research and you will be sanctioned if you try
             | to do so thing unethical.
        
             | AlexTWithBeard wrote:
              | IANAL, but I'd think in this case it's the prosecutor's job.
             | 
             | Also, the original post is about the traffic ticket. I'm
             | pretty sure if the judge hears a reference to something he
             | had never heard before, he'll be like "huh? wtf?"
        
             | [deleted]
        
         | TheDudeMan wrote:
         | What would happen if a human lawyer did that?
        
           | ruined wrote:
           | https://en.wikipedia.org/wiki/Disbarment
        
         | eloff wrote:
         | I think you're right here and it's the same reason I see AI as
         | a tool in the software profession. You can use it to speed up
         | your work, but you have to have someone fully trained who can
         | tell the difference between looks good but is wrong, versus is
         | actually usable.
         | 
         | I've been using copilot for half a year now and it's helpful,
         | but often wrong. I carefully verify anything it gives me before
         | using it. I've had one bug that made it to production where I
         | think copilot inserted code that I didn't notice and which
         | slipped past code review. I've had countless garbage
         | suggestions I ignored, and a surprising amount of code that
         | seemed reasonable but was subtly broken.
         | 
         | This will still require a human lawyer (and/or intern,
         | depending on the stakes) to check its output carefully. I am
         | not now, nor have I ever been afraid that AI is coming after my
         | job. When it does, we're dangerously close to general AI and a
         | paradigm shift of such magnitude and consequence that it's
         | called The Singularity. At which point we may have bigger
         | worries than jobs.
        
           | weatherlite wrote:
           | > I've had one bug that made it to production where I think
           | copilot inserted code that I didn't notice and which slipped
           | past code review
           | 
           | I'm not saying this is good but come on. Humans do that all
           | the time, why aren't we so harsh on humans?
           | 
           | > I am not now, nor have I ever been afraid that AI is coming
           | after my job
           | 
           | I am. This thing amazed me, and even if it won't be able to
           | 100% replace humans (which I doubt), it can make juniors
           | orders of magnitude more productive for example. This will be
           | a complete disruption of the industry and doesn't bode well
           | for salaries.
        
             | hgsgm wrote:
             | Broken window fallacy. Software is so bad that AI making
             | us more productive won't put us out of work; it will make
             | software suck a bit less. It will make bugfixes and tech
             | debt payback more affordable.
        
             | eloff wrote:
             | I'm in the top x% of my profession, after 20 years of
             | grinding. I'm unafraid. I don't see my salary taking a cut
             | anytime soon. When I was searching for a job back in March,
             | I had a 50% offer-to-response rate (if the company
             | responded to my application).
             | 
             | People who are just skating by may have cause for concern.
             | But those are the people with the most to gain from it, so
             | maybe not even them.
             | 
             | Demand is so high in the business, I have trouble imagining
             | that any tool could make a meaningful impact on that.
             | 
             | It would need to multiply productivity by a huge number,
             | and nothing has that impact. Copilot is barely,
             | optimistically, above 10%. I don't really think I get that
             | much, but let's assume it for argument's sake.
        
           | lolinder wrote:
           | Yes! I think the legal system would and should look
           | differently at a tool like this in the ear of a licensed
           | lawyer, and AI tools will be invaluable for legal research. I
           | just don't think the output of an AI should be fed directly
           | into a non-lawyer's ear, any more than I think a non-
           | programmer should try to build a startup with ChatGPT as
           | technical co-founder.
        
         | ma2rten wrote:
         | You could probably do much better than ChatGPT if you built a
         | bot specifically for being a lawyer.
        
           | lolinder wrote:
           | Yes, you'd do better, but you'd still have a LLM that is
           | designed to predict the _most likely_ next words. It would
           | still hallucinate and invent case law, it would just be even
           | harder for a non-lawyer to recognize the hallucination.
        
         | barbazoo wrote:
         | Maybe that's where we're headed. It looks like it's becoming
         | more and more okay to just make things up to please your tribe.
         | Why shouldn't that seep into the courtroom. I hope this doesn't
         | happen.
        
         | narrator wrote:
         | If you make a ridiculous argument using confabulated case law
         | as a lawyer, you can be subject to discipline by the state bar
         | and even lose your law license. The legal system's time and
         | attention is not free and unlimited and that's why you need a
         | license to practice law. The judges and so forth don't want to
         | deal with a bunch of people talking nonsense. Who is the lawyer
         | who is putting their reputation on the line for the AI's
         | argument? The people doing this want to say nobody is at fault
         | for the obviously bogus arguments it's going to spout. That's
         | why it's unacceptable.
        
         | ethanbond wrote:
         | DoNotPay seems to know very well what they're doing.
         | 
         | It really doesn't strike me as true that law requires absolute
         | precision. There are many adjacent (both near and far)
         | arguments that can work in law for any given case, since the
         | interpreter is a human. You just need no silly mistakes that
         | shatter credibility, but that's very different from "get one
         | thing wrong and the system doesn't work at all or works in
         | wildly unexpected ways."
         | 
         | Low end law will be one of the first areas to go due to this
         | tech. DoNotPay actually has already been doing this stuff
         | successfully for a while (not in court proceedings themselves
         | though).
        
           | intrasight wrote:
           | >DoNotPay seems to know very well what they're doing.
           | 
           | And even when they lose, they win. Look at the publicity they
           | are getting.
        
           | lolinder wrote:
           | There are also many adjacent algorithms that could solve the
           | same problem, but you still need to execute the algorithm
           | correctly. LLMs are not ready for _unsupervised_ use in any
           | domain outside of curiosity, and what DoNotPay is proposing
           | would be to let one roam free in a courtroom.
           | 
           | I'm not at all opposed to using LLMs in the research and
           | discovery phase. But having a naive defendant with no legal
           | experience parroting an LLM in court is deeply problematic,
           | even if the stakes are low in this particular category of
           | law.
        
             | ethanbond wrote:
             | That's nowhere near analogous because between every working
             | algorithm are massive gulfs of syntactic and semantic
             | failure zones. This is _not_ the case with human language,
             | which is the whole power of both language production and
             | language interpretation.
             | 
             | Is it more problematic than this person 1) not being
             | represented, 2) having to pay exorbitant fees to be
             | represented, or 3) having an extremely overworked and
             | disinterested public defender?
             | 
             | I'm not convinced.
             | 
             | The idea that we need to wait to do this stuff until the
             | people whose profession is under threat give "permission"
              | is dismissible on its face and is exactly why we _should_
              | be doing this as quickly as possible. For what it's
             | worth, I mostly agree with you: I'm doubtful the technology
             | is there yet. But that's a call for each defendant to make
             | in each new case and so long as they're of sound mind, they
             | should be free to pick whatever legal counsel they want.
        
               | freejazz wrote:
               | "I'm not convinced."
               | 
               | Yeah, no surprise. You seem completely unreasonable if
               | you posit only these three options for freakin' traffic
               | court...
        
               | ethanbond wrote:
               | What are the others? If you contest a ticket, you either
               | have representation or you don't. You can either afford
               | good representation or you cannot.
        
               | freejazz wrote:
               | Traffic court attorneys aren't expensive. They are
               | actually incredibly cheap. There's also no public
               | defenders in US traffic courts at least.
        
               | bluGill wrote:
               | Last time I checked with a lawyer about a traffic ticket
               | he told me that it wasn't worth his time to go to court
               | (this was the school provided free legal service and the
               | case was just a burnt headlight that I didn't have
               | verified fixed within a week, you decide if he should
               | have gone to court with me or if he would have for a more
               | complex case), but I was instructed how to present my
               | case. I got my fine reduced at least, which was important
               | as a student paying my own way (I'm one of the last to
               | pay for college just by working jobs between class, so
               | this was important)
        
               | freejazz wrote:
                | Yeah, that's because he's an attorney who gets paid a
                | lot. Go to the traffic court and you'll find the ones
                | that don't get paid a lot. That's why they're hanging
                | out in traffic court representing litigants there.
        
               | notahacker wrote:
               | I mean, _not contesting the ticket_ is likely to be a
               | better option than delegating your chances of not being
               | convicted of contempt of court or perjury to the
               | truthfulness and legal understanding of an LLM...
        
               | ethanbond wrote:
               | Sure if your objective is to minimize your own personal
               | exposure. If your goal is to push toward a world where
               | poor folks aren't coerced into not contesting their
               | tickets because they can't afford to go to court or get
               | representation, then maybe it is a good option.
        
               | JumpCrisscross wrote:
               | > _goal is to push toward a world where poor folks aren't
               | coerced into not contesting their tickets_
               | 
               | Invest in a company doing this properly and not pushing
               | half-baked frauds into the world. Supervised learning,
               | mock trials. You're proposing turning those poor folk
               | into Guinea pigs, gambling with their freedom from afar.
        
               | ethanbond wrote:
               | This company has been doing this stuff _for years_. Yes
               | this is a big step forward but it's not from zero, as
               | you're suggesting. What makes you think they haven't been
               | doing mock trials and tons of supervised learning?
               | 
               | And no, I'm not. I don't think Defendant Zero (or one, or
               | two, or 100) should be people whose lives would be
               | seriously affected by errors. I'm pretty sure DNP doesn't
               | either.
        
               | JumpCrisscross wrote:
               | > _What makes you think they haven't been doing mock
               | trials and tons of supervised learning?_
               | 
               | The CEO tweeting they fucked up a default judgement [1].
               | That not only communicates model failure, but also lack
               | of domain expertise and controls at the organisational
               | level.
               | 
               | [1] https://mobile.twitter.com/questauthority/status/1617
               | 5419211...
        
               | freejazz wrote:
                | It's not a licensed attorney... it can't be disbarred...
                | it has no legal duties to its clients. What don't you
                | get?
        
               | notahacker wrote:
               | I prefer a world in which people pay a small fine or make
               | their own excuses rather than pay the fine money to a VC-
               | backed startup for access to an LLM to mount a legally
               | unsound defence that ends up getting them into a _lot_
               | more trouble, yes.
               | 
               | If your goal is to ensure poor folks are pushed towards
               | paying an utterly unaccountable service further fees to
               | escalate legal cases they don't have the knowledge to win
               | so the LPs of Andreesen Horowitz have that bit more
               | growth in their portfolio, I can see how you would think
               | differently.
        
               | ethanbond wrote:
               | Yes yes, Andreessen bad and assumed negative outcomes
               | bad, therefore my solution (nil) is just fine.
        
               | freejazz wrote:
               | This is a false dichotomy that you are making up in order
               | to politically justify your narrative that is otherwise
               | completely made up nonsense best described as legal
               | malpractice.
        
               | JumpCrisscross wrote:
               | > _1) not being represented, 2) having to pay exorbitant
               | fees to be represented, or 3) having an extremely
               | overworked and disinterested public defender?_
               | 
               | You're leaving off being put in jail for contempt of
               | court, perjuring oneself, making procedural errors that
               | result in an adverse default ruling, racking up fines,
               | _et cetera_. Bad legal representation is ruinous.
        
               | ethanbond wrote:
               | Gee good thing everyone in court gets good representation
               | at reasonable prices eh?
               | 
               | I get that lawyers think their profession is important
               | (it is) and that by and large they're positive for their
               | clients (they are), but there are a lot of people who
               | simply do not have access to any worthwhile
               | representation. I saw Spanish-speaking kids sent to
               | juvenile detention for very minor charges conducted in
               | English with no interpreter and a completely useless
               | public defender. So in my view _that_ is the alternative
               | for many people, not Pretty Decent Representation.
               | 
               | There are people who can stomach the downside risks to
               | push this tech forward for people who cannot stomach the
               | downside risks _of the current reality._
        
               | JumpCrisscross wrote:
               | > _lot of people who simply do not have access to any
               | worthwhile representation_
               | 
               | They'd be better off representing themselves than
               | trusting a non-lawyer "lawyer" that subpoenas no-show
               | cops. Not another Andreessen-backed cluster.
        
               | ethanbond wrote:
               | Do you know that? How do you know that's what the model
               | would do? How do you know that the defendant doesn't have
               | an 80 IQ and an elementary school grasp on English? Do
               | you think this doesn't happen today and that these people
               | don't get absolutely _dragged_ by the system?
        
               | notahacker wrote:
               | We know that subpoenaing [possibly] no-show cops is what
               | the model will do because _that is what the CEO says the
               | model did_ in the run up to this particular case.
               | 
               | Someone with an 80 IQ and an elementary school grasp of
               | English is going to get absolutely _dragged_ by the
               | system with or without a  "robot lawyer" if they insist
               | on fighting without competent representation, but they'd
               | probably still stand a better chance of getting off a
                | fine on a technicality if they weren't paying a VC-backed
                | startup to write a letter to make sure the cops turned up
                | to testify against them.
               | 
               | They'd also be more likely to _not_ get absolutely
               | dragged if they listened to a human that told them not to
               | bother than a signup flow that encouraged them to
               | purchase further legal representation to pursue the case.
        
               | [deleted]
        
           | freejazz wrote:
           | "DoNotPay seems to know very well what they're doing."
           | 
           | What makes you think that? What they are doing is illegal.
           | It's the unlicensed practice of law.
        
             | SuoDuanDao wrote:
             | So if a chat program can pass the bar exam, it's okay?
             | Because I would bet that if a program can represent someone
             | semi-competently in court, passing the bar exam which needs
             | an objective marking criterion would be trivial by
             | comparison.
        
               | giantg2 wrote:
               | Most states also require a law degree in addition to
               | passing the Bar.
               | 
               | But a fun fact is that magistrates generally aren't
               | required to pass the Bar, nor hold a law degree. Most
               | states require extremely basic courses of 40 or so hours
               | of training. I know of a magistrate that has tried
               | numerous times to pass the Bar and has failed. I'm not
               | sure how much competence our system mandates.
        
               | freejazz wrote:
              | Cart before the horse... you have to pass the bar before
              | you get the chance to represent someone semi-competently
              | in court. Generally, lawyers have 5 years of experience
              | before they are considered competent enough to
              | semi-competently represent someone in court.
        
             | notahacker wrote:
             | Alternative spin on the "know very well what they're
             | doing": they know very well that it's unlicensed practice
             | of law and they'd have to withdraw from the case.
             | 
             | But doing so generates lots of publicity for their online
             | wizards that send out boilerplate legal letters.
             | 
             | The CEO tweeted about the system subpoenaing the traffic
             | cop. If they actually built a system which is so advanced
             | it can handle a court case in real time and yet so ignorant
             | of the basics of fighting a traffic ticket it subpoenas the
             | traffic cop it's... a very odd approach to product
             | management for the flagship product of a legal specialist,
             | and a bit scary to think anyone would use it. Easier to
              | make the mistake of claiming your flagship system does
              | stuff it shouldn't be doing if it's just vaporware and you
              | haven't put too much thought into what it _should_ do.
        
             | ethanbond wrote:
             | Their track record? Seems like this is the first you're
             | hearing of them, but this is just the latest (and yes, most
             | ambitious) experiment. They've been successfully using
             | technology to help normal people defeat abusive systems
             | built by "the professions" for years.
        
               | gamblor956 wrote:
               | Most people successfully contest traffic tickets by just
               | showing up to traffic court. It really is that easy.
               | 
               | So, DoNotPay is basically just a scam taking money from
               | people without providing anything of value.
        
               | yunwal wrote:
               | They've been scamming people for years and hitting them
               | with BS credit card charges. Just another abusive system
               | tacked onto the rest.
        
               | newswasboring wrote:
               | That's a pretty definitive and bold assertion. Any
               | source?
        
               | 55555 wrote:
               | https://www.trustpilot.com/review/donotpay.com
        
           | mattwad wrote:
            | It probably should not _replace_ a lawyer, though. Just
            | enable a single lawyer to handle more cases.
        
         | dragonwriter wrote:
          | Ironically, I recently saw a convo on Twitter where someone
          | was showing off a ChatGPT-generated legal argument, and it had
          | done exactly that: hallucinated a case to cite.
        
         | phpisthebest wrote:
         | >>The legal profession is perhaps the closest one to computer
         | programming
         | 
          | There is a ton of people using ChatGPT for programming... so
          | much so that I wonder if we will have a skills crisis as
          | people forget how to write code.
          | 
          | Sysadmin circles have tons of people celebrating how they will
          | not have to learn PowerShell now, as an example.
        
           | mattwad wrote:
            | I use Copilot every day, and it's never written more than
            | one line at a time that didn't need adjusting. If you are
            | letting an AI write code for you without knowing what it
            | does, you should not be in programming. I would probably
            | just fire someone if their response to me was ever "well,
            | the AI wrote that bit of code that deleted our DB, I didn't
            | know what it did."
        
           | godshatter wrote:
            | ChatGPT has no concept of a model for code, no understanding
            | of syntax or logic or even language keywords. It just pulls
            | info out of its backside that sounds believable, like a lot
            | of people do; it's just better at it. I suspect the immediate
            | future will be resplendent with tales of AI-generated code
            | causing catastrophes.
        
           | gptgpp wrote:
           | I agree, but it doesn't have to be that way. I've been
           | learning a couple new languages and frameworks lately and
           | found it really accelerates my learning, increases enjoyment,
           | and is good at explaining the silly mistakes that I make.
           | 
           | So it can enhance skills just as much as it can atrophy them.
           | 
           | And I'm okay with some skills atrophying... I hate writing
           | regular expressions, but they're so useful for some
           | situations. It's a shame chatGPT fails so hard at them,
           | otherwise I would be content to never use a regex again.
        
           | dinkumthinkum wrote:
           | There's a lot of people doing a big circle jerk over ChatGPT
           | with wild ideas of singularity and oddly eagerly awaiting the
           | end of white collar work. Whatever. I agree that programmers
           | being obsessed with it can lead to skill atrophy. But, in
           | reality, there are many people that are very technical and
           | are not becoming reliant on these things for coding.
        
         | enoch2090 wrote:
          | > Defendant (as dictated by AI): The Supreme Court ruled in
          | Johnson v. Smith in 1978...
          | 
          | > Judge: There was no case Johnson v. Smith in 1978.
          | 
          | > Defendant (as dictated by AI): Yes, you are right, there was
          | no case Johnson v. Smith in 1978.
        
           | AnimalMuppet wrote:
           | At which point you have seriously annoyed the judge, and made
           | him/her/justice-of-other-gender scrutinize your points _much_
           | more skeptically.
        
         | computerex wrote:
          | Yes, it's true that LLMs hallucinate facts, but there are ways
          | to control that. Despite the challenges, they can spit out
          | perfectly functional code to spec, to boot. So for me it's not
          | too much of a stretch to think that it'd do a reasonably good
          | job of defending simple cases.
        
         | criddell wrote:
         | It would be kind of funny to hear an argument made by an LLM
         | that has digested all the sovereign citizen bs.
        
           | [deleted]
        
         | winReInstall wrote:
          | Eh, what if it was trained on all the previous cases ever to
          | have existed? I think it could be pretty good, as long as it
          | detects novelty and flags it as an error case to confirm.
        
           | bastawhiz wrote:
           | That's not the point. LLMs work by predicting what text to
           | generate next. It doesn't work by choosing facts, it works by
           | saying the thing that sounds the most appropriate. That's why
           | it's so confidently wrong. No amount of training will
           | eliminate this problem: it's an issue with the architecture
           | of LLMs today.
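A toy sketch of the mechanism described above: the model picks the most likely continuation, not the true one. The tiny corpus and case names below are invented purely for illustration.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: it only knows which words tend to follow
# which, nothing about whether the resulting sentence is true.
corpus = ("the court ruled in smith v jones that the statute applies "
          "the court ruled in johnson v smith that the statute applies").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word, n=6):
    out = [word]
    for _ in range(n):
        options = follows[out[-1]]
        if not options:
            break
        # Pick the most likely next word, with no notion of factuality.
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(complete("court"))
```

The predictor will fluently "complete" a legal-sounding sentence either way; nothing in the architecture checks whether the cited case exists.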
        
           | lolinder wrote:
           | You could layer another system on top of the LLM generations
           | that attempts to look up cases referenced and discards the
           | response if they don't exist, but that only solves that
           | particular failure mode.
           | 
           | There are other kinds of failures that will be much harder to
           | detect: arguments that sound right but are logically flawed,
           | lost context due to inability to read body language and tone
           | of voice, and lack of a coherent strategy, to name a few.
           | 
           | All of these things could theoretically be solved
           | individually, but each would require new systems to be added
           | which have their own new failure modes. At our current
           | technological level the problem is intractable, even for
           | seemingly simple cases like this one. A defendant is better
           | off defending themselves with their own preparation than they
           | are relying on modern AI in the heat of the moment.
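The "layer on top" idea in the first paragraph can be sketched as a simple citation filter: extract anything that looks like a case citation and reject the draft if any citation cannot be verified. `KNOWN_CASES` and the regex below are stand-ins for a real legal-research lookup, and, as the comment notes, this only catches the invented-citation failure mode.

```python
import re

# Hypothetical stand-in for a real legal database lookup.
KNOWN_CASES = {"Miranda v. Arizona", "Gideon v. Wainwright"}

# Crude pattern for citations of the form "Name v. Name".
CITATION_RE = re.compile(r"[A-Z][a-z]+ v\. [A-Z][a-z]+")

def verify_citations(draft: str):
    """Return (ok, unverified) for an LLM-generated draft."""
    cited = CITATION_RE.findall(draft)
    unverified = [c for c in cited if c not in KNOWN_CASES]
    return (not unverified, unverified)

draft = "Under Miranda v. Arizona and Johnson v. Smith, the stop was unlawful."
ok, bad = verify_citations(draft)
print(ok, bad)  # False ['Johnson v. Smith']
```

A flawed-but-plausible argument built entirely from real citations would sail straight through this check, which is the comment's point about harder failure modes.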
        
             | dinkumthinkum wrote:
             | It's bizarre that anyone that supposedly works in
             | technology even thinks this is realistic. This betrays a
             | large lack of knowledge of technology and a child like
             | understanding of the legal system.
        
           | dinkumthinkum wrote:
           | It fails at determining if a number is prime and provides
           | bogus arguments to such effect. You think it would make sense
           | for this to argue complex legal cases with strategy? This
           | isn't Go or chess.
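For contrast, the primality question mentioned above is settled deterministically by a few lines of ordinary code, with no plausible-sounding but bogus reasoning involved; a minimal trial-division check:

```python
def is_prime(n: int) -> bool:
    """Deterministic trial-division primality check."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

print(is_prime(7919))  # True: 7919 is the 1000th prime
```

Unlike an LLM's answer, this either returns the right result or has a reproducible bug you can find.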
        
       | ergocoder wrote:
       | People choosing to represent themselves with an AI assistant is a
       | big no. It is too dangerous.
       | 
       | Choosing not to vaccinate is a yes.
       | 
       | Owning a gun is a yes.
       | 
        | Wearing bulletproof vests and shooting each other for fun is
        | also a yes. https://www.nwaonline.com/news/2020/aug/22/case-
        | dropped-in-b...
       | 
       | What a strange way of having freedom.
        
       | bilekas wrote:
       | Maybe I'm not fully understanding here but isn't the defendant
       | entitled to represent themselves?
       | 
       | Couldn't the defendant just do what the lawyer had planned?
       | 
        | The complaining letters make it sound like you cannot
        | participate in law if you're not a bar member...
        
         | LelouBil wrote:
         | No, the problem is that you need to be licensed to give legal
         | advice. And the chatbot is not a licensed lawyer.
        
           | DangitBobby wrote:
           | It's also not a person.
        
       | whycome wrote:
       | How long until someone deems this 'aiphobic'?
        
       | sleepybrett wrote:
       | I'm starting to side with the Butlerian Jihad, "Thou shalt not
       | make a machine in the likeness of a human mind"
        
       | sbaiddn wrote:
       | "Move fast and break things" should be followed by "commit
       | criminal negligence and go to jail!"
        
       | alpineidyll3 wrote:
       | Which of Adam Smith's principles was it that said that state
       | regulations are needed to secure the wages of labor?... Oh wait,
       | that's Marx.
        
       | jobs_throwaway wrote:
       | Sounds like those lawyers are scared!
        
       | pseingatl wrote:
       | Their mistake was using this tool in a criminal case. They could
       | have rolled it out for arbitrations and/or mediations,
       | proceedings which do not necessarily require legal counsel.
       | 
       | The legal system has various functions. One of these is
       | determining the facts. Think of it as agreeing on what prompt to
       | give the AI. Once the facts are determined, everything else
       | pretty much follows.
       | 
       | If Fact A then Consequence B. If the parties can agree on the
       | facts, AI will tell them the likely consequences. But as a fact-
       | finding tool, in present form AI is not useful.
       | 
       | Maybe the next iteration.
        
       | drKarl wrote:
        | There are two separate issues at play here. On the one hand,
        | it's true that ChatGPT, while impressive, in its current
        | incarnation sometimes returns correct responses and other times
        | returns seemingly correct hallucinations. Unless its accuracy
        | and certainty are over a certain threshold, say 95% for example,
        | I don't think it would be safe to use it for critical use cases,
        | like acting as a lawyer, as it very well might hallucinate laws
        | or prior cases, etc.
       | 
        | On the other hand, lawyers see the writing on the wall and see
        | AI as a threat to their really lucrative business, and they'll
        | use any means at their disposal to outlaw it or slow the
        | adoption of AI technologies that would replace lawyers.
       | 
        | Hell, I'm a software engineer and I see the writing on the wall
        | too, and I see AI as a threat as well. I also acknowledge the
        | limitless opportunities. I'm equal parts excited and terrified
        | by what's coming.
        
       | pierrebai wrote:
       | Another instance of the irreconcilable dichotomy of: "no one
       | shall ignore the law" and "only lawyers can understand the law".
        
         | GuB-42 wrote:
         | Only lawyers can give legal advice, big difference.
         | 
         | You can represent yourself in court if you want to (generally,
         | you don't) but if you want to offer that service to others, you
         | need to be a licensed lawyer. It is the same for many
         | professions.
        
           | pierrebai wrote:
           | You are saying exactly what I'm saying, only with different
           | words. As life goes on, this is a trend I get more and more
           | acutely aware of: people bias, opinion and values make them
           | re-interpret what others say to make it fit in their already
           | made up mind about what the conversation is about.
           | 
           | I was saying that not knowing the law is not a defense yet
           | the laws are so complex that an expensive expert that
           | dedicated all their study to the law is practically (as in,
           | yes, you have the option to not use a lawyer) required in
           | court.
           | 
            | It is not the same for many professions. In no other
            | profession are non-experts expected and required to know
            | the subject matter.
        
       | freitzkriesler wrote:
        | Lawyer Bar Associations trying their best to stave off the
        | inevitable. If only ChatGPT were trained on case law and law
        | school texts. Then when they sue, the ChatGPT model could defend
        | itself. I'm affected by this too, but watching lawyers be
        | rendered obsolete makes me very excited.
        
         | samtp wrote:
         | You're ignoring the fact that ChatGPT isn't trained to be
         | correct or logical, it's trained to be semantically
         | understandable and coherent. Which is an absolutely terrible
         | model to rely on in a court room.
        
           | freitzkriesler wrote:
           | I know which is probably why this was a proof of concept with
           | what would have been abysmal results.
        
             | samtp wrote:
             | This sentiment seems to be completely at odds with your
             | first comment.
        
               | freitzkriesler wrote:
               | Mate I'm not stupid. It was a proof of concept and even
               | if the end result was going to be a spectacular failure,
                | it doesn't change the fact that the bar associations
                | are desperately fighting this tooth and nail.
               | 
               | Learn to read between the lines, not everything is an
               | IDE.
        
               | samtp wrote:
               | My point is you're ignoring the fact that the bar
               | association has a distinct interest in having the courts
               | run smoothly and making sure lawyers in court are
               | competent. So it is highly in their interest for a
               | courtroom to not become a mockery because of a chatbot.
               | 
                | Not everything that associations like the bar do is in
                | bad faith or contrary to the public's interest.
        
       | jtode wrote:
       | This is gonna be the funniest battle ever. AI is not gonna go
       | away and it will be better at law than humans. I'll take any
       | odds.
        
         | gfodor wrote:
         | Same. Lawyers are screwed - they have been an insanely overpaid
         | profession forever, this is going to be absolutely devastating
         | for them. Also, there isn't a whole lot of leverage in
         | lawyering compared to the other professions that AI is poised
         | to dominate, so it feels like lawyers in particular are going
         | to have a hard time transforming themselves into a new
         | profession that is symbiotic with AI systems. This is a case
         | where the human part may entirely go away for large swathes of
         | cases.
        
           | jtode wrote:
           | You love to see it.
        
       | jwsteigerwalt wrote:
       | Unfortunately, the existence of the law cartel does not make this
       | ok. The solution is to break down the law cartel so that there
       | are more mediocre bar members that can facilitate the AI argument
       | being made.
       | 
       | You can represent yourself (where applicable) or you can have
       | representation. That representation is an officer of the court
       | and must adhere to professional standards to maintain their
       | license and bar membership. Even if you could force an algorithm
       | to adhere to professional standards, it's unlikely that it could
       | be legitimately considered an officer of the court any time soon.
        
       | DonHopkins wrote:
       | Judge:
       | 
       |     How does the defendant plead?
       | 
       | Defendant:
       | 
       |     ChatGPT is at capacity right now. Get notified when we're
       |     back.
       | 
       |     Write an acrostic poem about the status of ChatGPT.
       | 
       |     C: ChatGPT is currently down.
       |     H: Huge demand has caused the site to crash.
       |     A: All users will have to wait.
       |     T: Time is needed for the servers to catch up.
       |     G: Go grab a coffee and check back soon.
       |     P: Patience is key in this situation.
       |     T: Trust that the team is working hard to fix it up.
        
       | jwithington wrote:
       | Has anyone who has covered this actually used DoNotPay?
       | 
       | It's just such a poor product. I'm biased towards being pro-AI
       | lawyer, but there's no reason to think that an app that can't
       | execute the basics will push the technological frontier of the
       | legal field.
        
       | curtis3389 wrote:
       | Kafka hit the nail on the head with this one:
       | https://www.kafka-online.info/before-the-law.html
        
       | giantg2 wrote:
       | I'd rather have judges at the lower levels rely on some AI
       | assistance. The level of utter incompetence that I've witnessed
       | personally has been hard to comprehend.
        
       | Barrin92 wrote:
       | Absolutely atrocious stunt on the part of that company. A
       | glorified chatbot is neither legally accountable nor a trained
       | lawyer, and it cannot seriously represent anyone. I assume the
       | entire purpose of this was to bait the obvious shutdown and then
       | complain on the internet about the legacy lawyers or whatever to
       | generate press. Reminds me of the 'sentient AI' Google guy.
       | 
       | Is this going to be the new grift in the industry?
        
         | mach1ne wrote:
         | I'm not familiar with just how large the cases it's supposed
         | to tackle are, but for small stuff like parking tickets a few
         | solid legal arguments may be enough, and you don't need to pay
         | for tens of hours of a lawyer's time.
        
       | renewiltord wrote:
       | They're obviously just protecting their turf. Just like the
       | Oregon "professional engineers" or whatever.
       | 
       | Compete or legally disallow. When someone chooses the second
       | option you know they can't do the first.
        
       | ulizzle wrote:
       | We are calling these things A.I., as if they were really
       | intelligent; do they understand the concept of qualia?
        
       | 1024core wrote:
       | If you are being sued in a court, aren't you allowed to defend
       | yourself? Can't you act as your own lawyer? Then what's the
       | problem in using an AI?
        
         | dclowd9901 wrote:
         | Despite strict procedure and rules, the court is a place for
         | human common sense to also intervene. If it walks like a duck
         | and quacks like a duck...
         | 
         | In other words, you being a mouthpiece for an AI would likely
         | be seen as sufficiently separate from "representing yourself"
         | as to be not representing yourself at all.
         | 
         | I would imagine, in SCOTUS, were a case argued around allowing
         | folks to "represent themselves" like this, one of the first
         | questions a justice would ask is "Suppose, instead of an AI
         | computer talking through the defendant, an actual practicing
         | lawyer were talking through the defendant via an earpiece. In
         | that case, is the person still actually representing
         | themselves?"
        
       | ghusto wrote:
       | This is the first instance of the technology I've seen that made
       | sense to me. Sure, for something like law, it actually makes
       | sense to have AI-assisted -- or even full AI -- sessions.
       | 
       | Says something about our priorities that _this_ is what gets
       | shut down, while the battle artists are fighting to stop their
       | work being used for training is dismissed as Luddism.
        
       | brokenodo wrote:
       | Aside from the question of whether this plan was legal, DoNotPay
       | seems like a terrible product. The results it generates seem
       | laughably bad, and it's questionable whether "AI" is actually
       | involved when it takes them literal days to generate a document
       | for certain types of requests.
       | https://www.techdirt.com/2023/01/24/the-worlds-first-robot-l...
        
         | ericd wrote:
         | Most of these things seem to be hybrids, humans overseeing
         | automation, with varying degrees of human involvement. Guessing
         | they at least have a review queue for non-boilerplate docs
         | about to go out.
        
         | g_p wrote:
         | Indeed - and this is not new. Many years ago, I took a look to
         | see what all the fuss was about.
         | 
         | From start to end, he and his product seemed amateurish: from
         | giving out a herokuapp.com subdomain in early press releases
         | (which were republished on major sites) that later fell out of
         | use (allowing it to be taken over), through to the actual
         | generated output.
         | 
         | When I looked at a letter it generated, it was laughable. The
         | "chat bot" was simply asking questions, and capturing the
         | (verbatim) response, and throwing it straight into the
         | template. No sanity checking, no consistency, etc. There was
         | absolutely no conversational ability in the "chat bot" - it was
         | the equivalent of a "my first program" Hello World app, asking
         | you your name, then greeting you by name.
         | 
         | It wasn't capable of chat, conversation, or comprehension.
         | Anything you entered was blindly copied out into the resulting
         | letter. Seems nothing has changed.
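For illustration, the verbatim fill-in-the-template behavior described above can be sketched in a few lines of Python (the template fields and answers are invented for the example, not DoNotPay's actual ones):

```python
# Naive "chat bot" letter generator: ask questions, then paste the
# verbatim answers straight into a template -- no sanity checking,
# no consistency, no comprehension.
TEMPLATE = (
    "Dear {company},\n\n"
    "I am writing to dispute the {amount} charge on my account.\n"
    "Reason: {reason}\n\n"
    "Sincerely,\n{name}"
)

def generate_letter(answers):
    # Whatever the user typed is copied out blindly, exactly as entered.
    return TEMPLATE.format(**answers)

letter = generate_letter({
    "company": "Acme Parking",
    "amount": "$75",
    "reason": "asdf i dunno",   # garbage in...
    "name": "Jane Doe",
})
print(letter)                    # ...garbage out, verbatim
```

Anything entered by the user lands in the output unchanged, which is exactly the "Hello World, what's your name" pattern the comment describes.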
        
       | the_arun wrote:
       | We will soon see AI physicians, counsellors, etc. Is IBM's
       | Watson still in use?
        
       | shireboy wrote:
       | Rabbit trail: What is the state of non-ai tech to assist in
       | building or defending cases? I vaguely know a thing called
       | LexisNexis exists, but not what it does. Is there a system where
       | a lawyer can search, find related law, then fill out a form to
       | generate a draft case or defense? It seems to my un-lawyer self
       | the legal system could be codified into a rules engine: IF these
       | legal inputs, THEN these outputs. Or reverse: IF these desired
       | outputs, THEN these inputs need to be met.
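As a sketch of the rules-engine idea, a toy forward-chaining engine might look like this (the rule names, facts, and conclusions are all invented for illustration, not real law):

```python
# Toy forward-chaining rules engine: derive outcomes ("outputs") from
# case facts ("inputs") by firing IF/THEN rules until nothing new fires.
RULES = [
    # (rule name, preconditions, conclusion) -- all invented examples
    ("speeding",      {"over_limit", "radar_calibrated"}, "violation"),
    ("valid_defense", {"violation", "medical_emergency"}, "affirmative_defense"),
    ("dismissal",     {"affirmative_defense"},            "recommend_dismissal"),
]

def infer(facts):
    """Forward chaining: repeatedly fire any rule whose preconditions hold."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for _name, preconditions, conclusion in RULES:
            if preconditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = infer({"over_limit", "radar_calibrated", "medical_emergency"})
print("recommend_dismissal" in derived)  # True
```

The "reverse" direction the comment mentions (start from a desired output and work out which inputs must hold) is the same rule set walked backwards, i.e. backward chaining.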
        
         | throwaway17_17 wrote:
         | Lexis is an information aggregation company. They take a large
         | quantity of US law and court opinions and publish them. These
         | sources are then linked together using a relatively simple
         | tag-style system. In general you can get forms for very
         | specific and predictable case types, but for a large portion
         | of practice, outside of initial filings, the fact-specific
         | nature of subsequent pleadings makes them harder to formalize.
        
       | DaTaLa33 wrote:
       | We will get rid of them
        
       | joshuaheard wrote:
       | "Unauthorized practice of law" only applies to people, not tools.
       | AI is a tool. DoNotPay was not selling legal advice, only a tool
       | to understand the law. It is no different than if they were
       | selling a code book or other text that the defendant uses
       | himself. I think the real fear is that AI will supplant the
       | entire legal profession.
       | 
       | The legal profession went through a similar struggle when Nolo
       | published software that could draft basic legal documents by
       | filling in the blanks. Nolo won.
        
         | anotherman554 wrote:
         | You are making the assumption that a company advertising a robo
         | lawyer isn't engaged in unauthorized practice of law, which is
         | rather odd since I bet the Nolo books don't hold themselves out
         | as a lawyer, robo or otherwise.
         | 
         | You are also making the assumption any tool is allowed in a
         | courtroom, which is obviously not correct. You wouldn't be able
         | to use a Nolo book while testifying about what you witnessed at a
         | crime scene, either.
        
       | shmerl wrote:
       | _> As word got out, an uneasy buzz began to swirl among various
       | state bar officials, according to Browder. He says angry letters
       | began to pour in. _
       | 
       | So an organization with the sole purpose of gatekeeping and anti-
       | competitive market control does the anti-competitive market
       | control. Why is this bar idea itself still legal?
        
       | ChrisMarshallNY wrote:
       | Well, the comments about the company not doing things correctly
       | (licensing the algorithm), are correct.
       | 
       | It's actually critically important to have some kind of license
       | to represent people in court, as well as someone to pillory, if
       | they screw up, as it prevents some _truly_ evil stuff from
       | happening (I have seen _many_ people robbed blind by licensed
       | lawyers, and it would be a thousand times worse, if they could be
       | represented by anyone that sounds convincing enough). The stakes
       | are really high, and we shouldn't mess around (not in all cases,
       | of course, but it would really suck, if someone got the needle,
       | because a programmer forgot a semicolon).
       | 
       | That said, I think it's only a matter of time, before a
       | significant amount of legal stuff is handled by AI. AI shines, in
       | environments with lots of structure and rules; which pretty much
       | defines the law.
        
       | tahoeskibum wrote:
       | But then again, ChatGPT just passed law school exams, albeit
       | with a C+ in blind testing.
       | 
       | https://www.cnn.com/2023/01/26/tech/chatgpt-passes-exams/ind...
        
       | Maursault wrote:
       | Bar associations have set themselves up to fail. Enroll the AI in
       | law school, let it get a degree, and sit for the bar exam in
       | every state. Problem solved. And then the bar associations can
       | suck it.
        
         | vkou wrote:
         | And when the AI starts fabricating fictitious citations for
         | cases, disbar it, and ban it and its developers from practicing
         | law?
        
         | daveslash wrote:
         | Enroll the AI at law school and let it get a degree? Reminds me
         | of a whimsical shower thought I had once.... Create a business
         | that owns itself, and write an AI to run it. Owns its own bank
         | accounts and everything. Maybe the business is just selling
         | stickers online or something equally lightweight, but give it
         | all the legal status of a company. But a company with zero
         | human owners or any human employees. Make it a rebuke of the
         | "corporations are people" idea. Just a zombie out there selling
         | products/services, and making money that no human can ever
         | touch again....
         | 
         | If it sounds crazy/stupid, remember - I did say "whimsical
         | shower thought" ;)
        
       | sandworm101 wrote:
       | AI is able to sometimes make a valid argument, but when it comes
       | to specific facts and rules it drops the ball. Expert knowledge
       | requires actual understanding, not fitting patterns and
       | transposing words. Take a look at the following video from a
       | real expert in a particular field (military submarines). Look at
       | how ChatGPT falls apart when discussing "Akula" subs. It can
       | read English but clearly does not understand what that word
       | means in context. It also confidently cites incorrect facts,
       | something that would be very dangerous in any court.
       | 
       | Hint: Akula is a NATO reporting name. Nobody calls them the
       | Shark class of subs; even the Russians attach that name to a
       | very different class.
       | 
       | https://youtu.be/H8DIwNfIijU
        
       | 71a54xd wrote:
       | I wonder why he didn't continue to apply pressure and go ahead,
       | and, worst-case scenario, just flee to Europe if angry
       | prosecutors actually tried to jail him.
       | 
       | Ironically, the outcome of this whole saga is the most _lawyer_
       | outcome it could've been... by way of lawyers advocating to keep
       | legal protection out of reach of the common man and inserting
       | themselves between real innovation and progress for financial
       | gain.
        
       | [deleted]
        
       | squarefoot wrote:
       | AI lawyers indeed look scary, but I can't help but think of the
       | implications should they one day become way cheaper than carbon-
       | based ones. Imagine all those cases where $BIGCORP is
       | technically in the wrong, but today can screw some poor sap just
       | by dragging things out until he can't afford legal defense
       | anymore.
        
         | saint_fiasco wrote:
         | The $BIGCORP can use the tool too, it's not exactly asymmetric.
         | Imagine what a patent troll could accomplish with something
         | like this if it were allowed.
        
         | autoexec wrote:
         | I'm guessing that this tech won't end up creating equality for
         | poor people in the justice system.
         | 
         | Either it won't work as well as actual lawyers, in which case
         | it will become the only option poor people have (basically
         | replacing public defenders), or it will work just as well as
         | human representation (or even better), in which case "AI
         | lawyer" companies will charge just as much, if not more, for
         | their services as human lawyers do.
         | 
         | DoNotPay may be a humble start up now, but if the tech proves
         | to be effective they (and other future AI companies) will
         | eventually fleece their customers just as human lawyers do
         | today. Not because they have to, or because it is in any way
         | justified, but just because they can and doing it would make
         | them richer.
        
           | sh4rks wrote:
           | What about when the lawyer tech becomes open source and you
           | can have your own personal lawyer in your pocket at all
           | times?
        
             | squarefoot wrote:
             | Open Source _and_ disconnected. AI is a wonderful tool
             | that will turn into a nightmare when used to take
             | advantage of people. I wouldn't want my "pocket lawyer" to
             | send our private conversations anywhere but my personal
             | offline space. The risk that someone with vested interests
             | and enough funds could buy access to that data, if not
             | directly to the AI, is simply too high. Unfortunately it
             | seems that trustworthy AI is not going to happen anytime
             | soon: there's simply too much money to be made renting it
             | as a service, hence online and closed.
        
       | PostOnce wrote:
       | You can't just start practicing law without a license; you'll
       | ruin somebody's life. They'll assume you, or the computer, knows
       | what is going on, when in fact you just want Free Dollars.
       | 
       | I don't want Dr. iPad Joe who spent a grand total of 15 minutes
       | learning how to use ChatGPT making legal, medical, engineering,
       | or other important decisions for me or the place I live.
       | 
       | Now, I am of course free to use ChatGPT as a private person to
       | "come up with my own legal arguments", but should a company be
       | allowed to sell me a ChatGPT lawyer? No. They shouldn't be
       | allowed to sell me unlabelled asbestos products either.
       | 
       | I know we all hate regulations, but _some_ of them exist for a
       | reason, and the reason is Bad Shit happened before we had
       | regulations.
        
         | eru wrote:
         | > I know we all hate regulations, but some of them exist for a
         | reason, and the reason is Bad Shit happened before we had
         | regulations.
         | 
         | That's not the only reason regulations exist.
         | 
         | And most 'Bad Shit' can already be dealt with via existing
         | rules, instead of specific new regulations. But making new
         | rules sounds good to the voters and can also be a powergrab.
        
         | 0xy wrote:
         | This position means nobody gets adequate legal representation
         | unless they are wealthy, so essentially just 'screw poor
         | people'.
         | 
         | Who's more likely to get out of a wrongful charge? A wealthy
         | millionaire spending $1,000/hour on fancy lawyers, or a poor
         | guy whose public defender had one hour to look into his case?
         | 
         | AI levels the playing field, and anyone campaigning against it
         | wants poor people to continue to get railroaded.
        
         | belter wrote:
         | Wait until you hear about this Electric Car company, using real
         | people to beta test untried self driving software, on Real
         | roads against other drivers and pedestrians...
        
         | rcme wrote:
         | I find the need for lawyers a tragedy. Interactions with the
         | judicial system are often some of the most important events in
         | a person's life. The fact that it's necessary to pay someone
         | hundreds or thousands of dollars an hour to help navigate the
         | arcane process is sad and shouldn't be necessary. It would be
         | one thing if laws were meaningfully written down, so that
         | anyone could read the statutes and build their own argument,
         | but the laws are not written down in a way that has meaning
         | unless you are willing to wade through centuries of case law.
        
           | roenxi wrote:
           | Professional advocates aren't a result of any specific legal
           | system - if I'm at risk of having my life's savings and
           | achievements summarily destroyed then I want someone of
           | waaaaay above average verbal and emotional intelligence, who
           | is thinking clearly and not under any pressure themselves,
           | explaining why that shouldn't happen.
           | 
           | There is a problem where the laws are so complex and
           | numerous that it is no longer practical to understand them
           | or follow them all.
           | People have a bias and don't seem very good at separating
           | "good idea in the current context" from "thing that should be
           | legally required". Let alone navigating the complexity of the
           | phrase "cost benefit analysis". Anyone who lives life while
           | obeying all laws to the letter is at a serious disadvantage
           | to the average human - although since it is impossible to
           | know what all the laws are it is unlikely anyone could do
           | this.
           | 
           | But that arcanery isn't what drives the need for lawyers.
           | You'd have to be crazy to engage with something as powerful
           | and erratic as a legal system on your own. And crazy people
           | do frequently represent themselves already.
        
           | db48x wrote:
           | In part I understand what you mean. I think it is extremely
           | important that courtroom procedure doesn't get so complex
           | that it is impossible for an individual to cope with. In
           | practice a lot of judges go out of their way to make self-
           | representation possible. However, I don't think that having
           | professional lawyers represents a tragedy. Specialization is
           | very important, and we would all be worse off without it.
           | 
           | Teaching is even more important, and we use professional
           | teachers. Building a house is also an important moment in our
           | lives, and most people would do well to accept the advice of
           | a professional architect.
        
             | jiggywiggy wrote:
              | Yeah, but the rules that govern our lives in society
              | shouldn't need a PhD to understand.
              | 
              | There are often daily situations where both citizens and
              | police don't really know what the law is.
              | 
              | Reform has proven to be really hard. But that's the
              | tragedy.
              | 
              | For instance, taxes shouldn't really be more complicated
              | than filling in a simple automated form. Also for
              | businesses. And it should really be the burden of the
              | taxing government to make it all clear. But it's not;
              | it's a mess and the burden is put on the people.
        
               | pirate787 wrote:
               | This is by design. Governmental systems are "captured" by
               | special interests and made intentionally obtuse and
               | complex as a barrier to entry. Lawyers and judges are a
               | guild that works to make the law complex and extract
               | rents from the productive economy. Over 1/3 of the U.S.
               | Congress are attorneys as well.
        
               | kristiandupont wrote:
               | I honestly don't think there is such a malicious intent
               | behind it. Perhaps in some small instances.
               | 
               | Generally, the fact of the matter is simply that law is
               | highly complex and the way it evolves is almost always by
               | creating new laws, not getting rid of old ones. That's
               | unfortunate obviously, but just like you don't just
               | rewrite the Linux kernel, you can't just reset the legal
               | foundation.
        
               | rcme wrote:
               | Some, and maybe most, of the complexity is organic. But
               | there are specific instances, like the tax code, that
                | have been kept intentionally complex at the behest of
               | special interest groups.
               | 
               | And of course, laws are made mostly by lawyers. So they
               | don't have much of an incentive to change things.
        
             | personjerry wrote:
             | I disagree, I think for important things like the rules
             | that govern us (i.e. the legal system) we need to be able
             | to fully understand and interact with them. Imagine if
             | voting was so complicated that you had to pay someone to
             | vote for you!
             | 
              | Likewise, taxes should be simple, understandable, and
              | doable. Don't undersell our importance as citizens; we
              | should demand more because we deserve better!
        
               | ethbr0 wrote:
               | Imagine if computing was so complicated that you had to
               | pay someone hundreds or thousands of dollars an hour to
               | program for you.
        
               | hnfong wrote:
               | > Imagine if voting was so complicated that you had to
               | pay someone to vote for you
               | 
               | I think they call this "representative democracy".
        
             | iinnPP wrote:
             | It is already far too complex for the average American. In
             | other words, it is already too complex.
             | 
              | In fact, it is so complex that even some judges can't get
              | it right, even the basics.
        
               | mandmandam wrote:
               | The American legal system is broken to a Lovecraftian
               | degree.
               | 
               | The complexity is intentional - it's to make lawyers the
               | maximum possible money while avoiding maximum
               | accountability.
        
           | RandomLensman wrote:
           | Is the need to use an expert the issue or rather the price
           | point? Why would it be wrong to avail yourself of someone
           | else's expertise (and people use lawyers in non-case law
           | jurisdictions, too)?
           | 
           | Not sure anyone really would want to operate on themselves
           | (because the need for a surgeon at an important event in
           | their life is somehow "wrong").
        
             | brigandish wrote:
             | We all use technology to treat ourselves in lieu of a
             | surgeon all the time, whether it's a Google search, a
             | plaster, or cough medicine or whatever. Are you going to
             | give up all the advances that differentiate your situation
             | from that of someone in say, 17th century Europe, because
             | an expert _should_ do it _because they're an expert_?
             | 
             | No thanks. I'll take advances that make things easy enough
             | to avoid experts wherever I can get it and leave the
             | bloodlettings (which, with lawyers, will be from your bank
             | account) to others.
        
               | db48x wrote:
               | That's not what he said. The key word there was
               | _operate_, not _use a band-aid_. I wouldn't recommend
               | trying to take out your own appendix; it's a really bad
               | idea.
        
               | bloak wrote:
               | On the other hand:
               | 
               | https://en.wikipedia.org/wiki/Leonid_Rogozov
               | 
               | (If it had been filmed, the video would probably be on
               | YouTube with Dschinghis Khan's "Moskau" as the
               | soundtrack.)
        
             | lmm wrote:
             | > Is the need to use an expert the issue or rather the
             | price point? Why would it be wrong to avail yourself of
             | someone else's expertise (and people use lawyers in non-
             | case law jurisdictions, too)?
             | 
             | Even needing an expert at all is an issue - the law that
             | governs society needs to be accessible to members of
             | society, long before they reach the point of litigation.
        
               | RandomLensman wrote:
               | It might also be a function of legal tradition/system. In
               | some places laws are quite easy to read for me, in others
               | I find it much tougher (as a non-lawyer).
        
             | hutzlibu wrote:
             | "need to use an expert the issue or rather the price point"
             | 
             | Both, but for most people it is simply the price point, so
             | that is the more important issue.
             | 
             | I think no one has a problem with a legal expert being
             | necessary when setting up complicated contracts with
             | multiple parties involved, but for very basic things it
             | should not be; rather, the laws should be clearer and
             | simpler.
        
             | aaaaaA3 wrote:
             | >Not sure anyone really would want to operate on themselves
             | (because the need for a surgeon in an important event in
             | their life is somehow "wrong").
             | 
             | Not operate, but since over here in Europe just about any
             | piece of paper passes as a prescription, I tend to print my
             | own. (Most people don't know this, but EU pharmacies are
             | required to accept prescriptions from other EU countries.
             | There's no standard format or verification procedure, so
             | forgery is trivial even if your country has a more secure
             | domestic system)
             | 
             | What's the point of going to (or even calling) a doctor for
             | an antibiotics prescription? It's not like they're going to
             | perform blood tests before prescribing. Want some Cialis
             | for the weekend? Why go to a doctor? You can just pull up
             | the contraindications on Google. Why bother doctor shopping
             | for Ozempic? Just print your own prescription.
        
               | short_sells_poo wrote:
               | At least in Switzerland, I always had to have blood
               | tests done before the doctor would prescribe
               | antibiotics. The core issue you have is with doctors
               | prescribing things willy-nilly.
        
               | aaaaaA4 wrote:
               | That might be a thing in some EU countries, but it's
               | certainly not the norm across the EU. You can still buy
               | antibiotics without a prescription in many EU countries,
               | for example in Spain it's entirely dependent on the
               | pharmacist.
        
               | meindnoch wrote:
               | Wait, what? I can just print a foreign prescription for
               | Adderall or Dilaudid and the pharmacy will accept it? I
               | don't think that would work...
        
               | brrrrrt wrote:
               | It depends on the country, usually there are national
               | rules requiring special prescriptions for "fun" drugs
               | like that.
               | 
               | If you can pick up drugs like that with an ordinary
               | private prescription in your country, a foreign
               | prescription should work.
               | 
               | For further information:
               | 
               | https://europa.eu/youreurope/citizens/health/prescription
               | -me...
               | 
               | https://europa.eu/youreurope/citizens/health/prescription
               | -me...
        
               | RandomLensman wrote:
                | Pretty sure that sometimes a doctor might know more
                | than you about a prescription, or that their educated
                | guess about which antibiotic is appropriate is better
                | than yours, for example.
        
               | aaaaaA3 wrote:
               | Hey, I definitely agree.
               | 
               | On the other hand, every time I've gone to the doctor
               | with a cold, I've always been prescribed the same
               | antibiotic after basically no examination.
               | 
               | If I had some new, unexpected symptoms, I'd probably want
               | to at least call the doctor.
        
               | scatters wrote:
               | You do realize that antibiotics are completely
               | ineffective against a cold? You're wrecking your
               | digestive system and risking antibiotic resistance for
               | nothing. If your doctor is prescribing antibiotics,
               | either they're a terrible doctor, or they're a bad doctor
               | and you're a worse patient.
        
               | aaaaaA4 wrote:
                | Yes, sorry. That's just the language barrier raising
                | its head. What I meant was strep throat; obviously
                | there's not much of a point in taking antibiotics for
                | a viral infection.
               | 
               | I don't need a doctor to inspect my tonsils, I have
               | access to a phone with a flashlight.
               | 
               | And for what it's worth, I think I've taken antibiotics
               | twice in the past 4 years. Always according to the
               | instructions on the packaging.
        
             | rcme wrote:
             | > because the need for a surgeon in an important event in
             | their life is somehow "wrong"
             | 
             | Most countries try to guarantee access to a surgeon when
             | one is needed.
        
               | RandomLensman wrote:
               | Where I live, insurance for civil litigation is actually
               | pretty cheap. For criminal cases, my understanding is
               | that in a lot of places you will be given a lawyer if you
               | cannot pay for one as a defendant.
        
               | lmm wrote:
               | They've been slowly bleeding those systems dry, putting
               | caps on how much they'll pay and restricting who gets
               | access.
        
               | madsbuch wrote:
               | In Denmark there is "fri process" that will ensure a
               | lawyer is provided when really needed and you can't
               | afford it - my guess is that other countries have similar
               | systems.
        
               | iinnPP wrote:
               | The problem is that, most of the time, those lawyers are
               | so loaded with cases that they cannot provide you full
               | representation.
               | 
               | Ask any US Federal DA or US Federal Public Defender who
               | you know to be honest. Any.
               | 
               | It may differ in your country, but it is unlikely.
        
               | madsbuch wrote:
               | > It may differ in your country, but it is unlikely.
               | 
               | It differs in my country, and it is very likely. The US
                | follows an Anglo approach to law. No country in the EU
                | does - I understand it is easiest to assume that other
                | countries work the way you expect, though that is not
                | very productive.
        
               | NoboruWataya wrote:
               | Most countries also give you a lawyer if you need it, no?
               | Like, you get public defenders in the US?
        
               | rcme wrote:
               | Not for civil cases. And the reputation of public
               | defenders for criminal cases is not particularly good.
        
           | mgaunard wrote:
           | Just move to a country not based on case law?
        
           | pjc50 wrote:
           | This is a tricky situation, but it seems like it should be
           | possible if structured as a "McKenzie friend":
           | https://www.legalchoices.org.uk/types-of-lawyers/other-
           | lawye... / https://en.wikipedia.org/wiki/McKenzie_friend
        
         | [deleted]
        
           | [deleted]
        
         | ActorNightly wrote:
         | Except this aint it.
         | 
         | "The person challenging a speeding ticket would wear smart
         | glasses that both record court proceedings and dictate
         | responses into the defendant's ear from a small speaker. The
         | system relied on a few leading AI text generators, including
         | ChatGPT and DaVinci."
         | 
          | I.e. it's the equivalent of a person effectively studying
          | the actual law and then representing themselves in court,
          | just in a more optimal manner.
         | 
         | Even if it fails, it was supposed to be something trivial like
         | a speeding ticket, because after all, this is a test.
         | 
          | And funnily enough, the question of whether it will work
          | has already been answered. If law firms believed it was
          | bullshit, they would just put a very good attorney on that
          | case and disprove it.
         | Barring it from entry with threat of jailtime pretty much
         | proves that they are full of shit and they know it.
        
           | NoboruWataya wrote:
           | > I.e its equivalent of a person effectively studying the
           | actual law and then representing themselves in court, just in
           | a more optimal manner.
           | 
           | It's not equivalent at all. ChatGPT and DaVinci have not
           | "studied the law" in the same way as any human would.
           | 
           | > If law firms believed it was bullshit, they would just put
           | a very good attorney on that case and disprove it. Barring it
           | from entry with threat of jailtime pretty much proves that
           | they are full of shit and they know it.
           | 
           | This is a traffic ticket case. He's not up against Sullivan &
           | Cromwell, he's up against some local prosecutor. I'm sure if
           | some white shoe law firm were being paid hundreds of
           | thousands to defend a case against a guy using ChatGPT,
           | they'd be fine with it.
           | 
           | Even though we have an adversarial system, the state can't
           | just let ordinary folk hang themselves with cheap half-baked
           | "solutions". It would be unjust/bad press (delete as
           | appropriate to your level of cynicism). That's why we have
           | licensing requirements, etc.
        
             | xienze wrote:
             | > the state can't just let ordinary folk hang themselves
             | with cheap half-baked "solutions".
             | 
             | Why hasn't the practice of self-representation been banned
             | then? It's almost without exception a surefire way to hang
             | oneself in a court room.
        
               | notahacker wrote:
               | Self representation doesn't involve some mountebank
               | selling you a "robot lawyer that will get you off your
               | parking ticket" solution
        
               | rkachowski wrote:
               | What's the difference between self representation and
               | practicing law without a license - is there an exemption
               | for unlicensed practitioners when they are performing on
               | their own behalf, or this is a distinct category somehow?
        
               | xienze wrote:
               | So what if I just decided of my own accord to use ChatGPT
               | and train it myself? Or someone on GitHub made a fully
               | trained version of it available in a Docker container,
               | for free?
        
               | notahacker wrote:
               | Then nobody would issue legal threats to you for selling
               | "Robot Lawyer" services. But you probably wouldn't get
               | permission to use Google Glass in the courtroom either,
               | so you'd have to commit your legal arguments to memory as
               | well as hoping the AI hadn't ingested too much "freeman
               | of the land" nonsense...
        
         | eloisant wrote:
         | I see that as a tool, I'm not sure why it's presented as "AI
         | being the lawyer".
         | 
         | You're allowed to represent yourself in court, most of the time
         | (and for parking ticket I'm pretty sure) you have no obligation
         | to have a lawyer. Now if you want to pay for a tool that helps
         | you represent yourself better, why not?
        
           | notahacker wrote:
           | > I see that as a tool, I'm not sure why it's presented as
           | "AI being the lawyer".
           | 
           | The hero element for the product's home page describes it as
           | "The World's First Robot Lawyer". Twice.
        
           | zwischenzug wrote:
           | I had the same thought, but I guess if you're paying someone
           | for legal counsel then they need to be held responsible for
           | the service they are giving, however they give it. That's
           | qualitatively different from buying a legal textbook and
           | advising yourself from it, since you are the one deriving
           | counsel to yourself from generally available information.
        
         | xiphias2 wrote:
         | Can you please find me a licensed lawyer for less than the
         | price of a parking ticket?
        
         | charcircuit wrote:
         | How does failing to get rid of a traffic ticket ruin someone's
         | life?
        
           | UncleEntity wrote:
           | I used to drive a cab and people would tell me why they were
           | in the cab quite often.
           | 
           | My favorite was this guy who got a ticket on a _bicycle_ for
           | not having a headlight, moved out of state and ten years
           | later had his car impounded for driving without a license
           | because they apparently suspended it for getting (or, more
           | correctly, not paying) a ticket he got on a bicycle.
           | 
           | I'm guessing he never tried changing his license to the new
           | state because Arizona licenses are good until you're 65.
        
           | jeroenhd wrote:
           | ChatGPT lies. It makes up facts, sources, and nonsense
           | arguments.
           | 
           | Lying to a judge is generally not a good idea. You can go
           | from traffic ticket to contempt of court real fast if you
           | start lying in court.
           | 
           | ChatGPT also assumes you're speaking the truth. If you ask it
           | about a topic and say "that's wrong, the actual facts are..."
              | then it'll change its argumentation to support your position. You
           | probably don't want your legal representation to become your
           | prosecutor when they use the right type of phrasing.
        
             | renewiltord wrote:
             | Yeah, but notebooks are allowed in court and you could
             | spill water on your notebook and confuse an 8 for a 0 and
             | then read it out to court and it would be a lie.
             | 
             | Lying to judges is bad, so notebooks should be banned.
             | Likewise, anything typed should be banned because we could
             | have hit the wrong key, causing you to lie to the judge.
        
             | xienze wrote:
             | > Lying to a judge is generally not a good idea. You can go
             | from traffic ticket to contempt of court real fast if you
             | start lying in court.
             | 
             | Then I suppose that's just the risk the defendant takes,
             | isn't it? Let people use ChatGPT, if the rope they're given
             | ends up hanging enough people, that'll be the end of that,
             | won't it?
             | 
             | Also, everyone is ignoring the possibility that this same
             | person could've had ChatGPT generate a script (general
             | outline of arguments, "what do I say if asked this" type of
             | stuff), memorized it, and used that to guide his self-
             | defense. Fundamentally, no difference. No one would've
             | known, and no one would've objected.
             | 
              | To me, this move is less "oh we need to protect people
              | from getting bad legal advice from a robot" and more
              | "we're not even gonna let this thing be used a single
              | time in court to keep our jobs from being automated."
        
               | jonathanstrange wrote:
               | > _Fundamentally, no difference. No one would 've known,
               | and no one would've objected._
               | 
               | I'm not a legal professional but it seems obvious to me
               | that there is a fundamental difference, namely the one
               | you describe just before that sentence. The whole legal
               | system is built around and under the assumption that all
               | kinds of people want to trick it, and judges tend to be
                | allergic to this kind of reasoning. Memorizing legal
                | arguments and getting live legal advice from earphones
                | in your glasses are not the same thing. Besides, even
                | lawyers are advised not to defend themselves in court,
                | and it would generally be very bad advice for anyone
                | to do so.
        
               | xienze wrote:
                | > Memorizing legal arguments and getting live legal
                | advice from earphones in your glasses are not the same
                | thing.
               | 
               | Only in the strictest sense. Let's say the person
               | memorizing ChatGPT's directions handles their case in the
               | exact same manner as if it was being relayed to them live
               | (i.e., the set of statements/questions from the judge
               | lined up perfectly with what ChatGPT presented in its
               | script). What then? Same outcome, different delivery
                | method. We're kind of splitting hairs with the "live
                | legal advice" thing. The defendant could bring a pile of
               | law books with him and consult those without anyone
               | blinking an eye. The objection seems to boil down to
               | "well OK, if you want to represent yourself you better
               | not consult an intelligent system to help you form your
               | defense." Why not though? Seems more about job protection
               | than anything else.
               | 
                | > Besides, even lawyers are advised not to defend
                | themselves in court, and it would generally be very
                | bad advice for anyone to do so.
               | 
               | And I say: let people discover the downside of using
               | ChatGPT for defense if it's so inept. Bad outcomes are
               | the best way to prevent widespread usage, not pre-emptive
               | bans in the interest of keeping people from shooting
               | themselves in the foot.
        
             | IIAOPSW wrote:
             | Honest tangent that I'm dying to find an answer for. Is the
             | assumption of truthful prompts something openAI decided
             | should be there, or is the assumption something very deeply
             | baked into this sort of language model? Could they make an
             | argumentative, opinionated, arrogant asshole version of
             | chatGPT if they simply let it off its leash?
        
             | sebzim4500 wrote:
             | >Lying to a judge is generally not a good idea. You can go
             | from traffic ticket to contempt of court real fast if you
             | start lying in court.
             | 
             | If you knowingly lie because ChatGPT told you to that's on
             | you. If you said something that you believed was true
             | because ChatGPT said it to you then that's not perjury,
             | it's just being wrong.
             | 
             | > You probably don't want your legal representation to
             | become your prosecutor when they use the right type of
             | phrasing.
             | 
             | When the worst case scenario is having to pay the parking
             | fine, it might be worth taking this risk to avoid paying a
             | lawyer.
        
           | dragonwriter wrote:
           | > How does failing to get rid of a traffic ticket ruin
           | someone's life?
           | 
            | The downside of contesting a traffic ticket is not "failing
            | to get rid of the ticket". The ticket amount is effectively
            | a no-contest plea bargain offer, not the maximum penalty
            | for the offense, not to mention the potential _additional_
            | penalties for violating court rules.
        
           | sitkack wrote:
           | Unpaid traffic tickets are probably many people's gateway to
           | their first arrest warrant. Then once in the system it is
           | hard to escape. It is a real thing.
           | 
           | https://brunolaw.com/resources/general-criminal-law/what-
           | to-...
           | 
            | Then once this process starts you automatically get:
            | 
            | * Suspended driver's license
            | * Ineligible to renew your driver's license
            | * Ineligible to register your vehicle
            | * Vehicle being towed and impounded
            | * Increased insurance premiums
           | 
            | Interactions with the state are serious business.
        
             | matheusmoreira wrote:
             | In what countries do unpaid parking tickets justify _arrest
             | warrants_? That 's seriously insane.
        
               | gambiting wrote:
               | Traffic tickets and parking tickets are two different
               | things though. Unpaid parking ticket is unlikely to get
               | you arrested anywhere, but an unpaid speeding ticket will
               | eventually lead to some pretty unpleasant consequences
               | which yes, might include arrest(to bring you in front of
               | the judge and explain yourself).
        
             | charcircuit wrote:
             | Losing the case doesn't mean you won't pay the ticket.
        
           | gambiting wrote:
           | It absolutely can do.
           | 
           | For instance, as I discovered recently going through this
           | process myself - here in UK when applying for British
           | citizenship you have to disclose any court orders against
           | you. Now here's a thing - if you were given a ticket for
           | speeding, accepted it and paid it then that's it, no harm
           | done. If it's less than 3 tickets in the last 5 years then
           | you don't even need to list it on the application form.
           | 
           | However, if you went to court to contest it and _lost_ , then
           | you now have a court order against you - and that's an
           | automatic 3 year ban on British citizenship applications, and
           | even after that you always have to list it as a thing that
           | happened and it can be used to argue you are of "bad
           | character" and be used to deny you the citizenship.
           | 
           | So yes, failing to get rid of a traffic ticket(in the court
           | of law) can absolutely ruin your life.
        
             | charcircuit wrote:
              | This only applies to a small group of people, and that
              | risk is known up front. If it's that important that they
              | win, they can choose not to use this.
        
               | gambiting wrote:
               | How is this known upfront? I've lived for over a decade
               | in this country without knowing this, until I actually
               | applied for my citizenship last year. I'm just lucky I
               | never went to court to contest a speeding ticket(because
               | I never got any) or I could have screwed myself over
               | without even knowing.
               | 
               | Also define "small group" - nearly 200k people apply for
               | British citizenship annually, and I bet most of them have
               | no idea contesting a traffic ticket can cost them a
               | chance at becoming citizens.
        
           | bayindirh wrote:
           | Today, parking ticket. Tomorrow, DUI. Next week, a murder
           | case.
           | 
           | It's about putting the first wedge in.
        
             | charcircuit wrote:
             | As the AI gets better people will trust it with more and
             | more kinds of cases and cases with more increased
             | complexity. If people want to pay for a real licensed
             | lawyer they are still able to do so.
        
               | bayindirh wrote:
               | > As the AI gets better people will trust it
               | 
               | AI is just informed search, a dwarf sitting on shoulders
               | of human knowledge. There were medical "expert systems"
               | in 2000s, yet we still have doctors.
               | 
                | In my understanding, in most cases AI will be a
                | glorified assistant, not an authoritative decision-
                | maker. Otherwise it collides head-on with barriers and
                | semis. I won't trust such a system even with a parking
                | ticket, let alone my life.
               | 
               | We're just at the top of a hype-cycle now. AI can do new
               | things, but not as well as we dream or hope.
        
               | Dylan16807 wrote:
               | Whatever it is, it's getting better.
               | 
               | And lots of people would rather have a glorified
               | assistant than nothing.
        
               | bayindirh wrote:
               | > Whatever it is, it's getting better.
               | 
               | Yeah, telling grammatically correct but factually wrong
               | things, having biases, crashing into things, and whatnot.
               | 
               | > And lots of people would rather have a glorified
               | assistant than nothing.
               | 
               | An assistant which can't be checked for accuracy, or for
               | telling the truth.
               | 
               |  _The future is bright!_
        
               | Dylan16807 wrote:
               | > Yeah, telling grammatically correct but factually wrong
               | things, having biases, crashing into things, and whatnot.
               | 
               | If you don't think it's right more often, please go poke
               | an older model. Especially look at the ability to stay on
               | topic.
               | 
               | > An assistant which can't be checked for accuracy, or
               | for telling the truth.
               | 
               | What do you mean "can't be checked"? And any kind of
               | assistant can make mistakes.
               | 
               | And calling it an assistant was _your idea_.
        
               | vel0city wrote:
               | Any kind of assistant can make mistakes. But a human
               | assistant can be made to show their work and explain
                | their reasoning to check their output. If ChatGPT says
                | "this thing is totally legal" or "don't worry about
                | that rash", how am I to validate its "reasoning"? How
                | do I know where it's drawing its inference from?
        
           | gwd wrote:
           | --->8
           | 
           | ChatGPT: "Your honor, that couldn't have been me, as I drive
           | a red Mustang, not a blue minivan, and was in Nepal climbing
           | a mountain at the time.."
           | 
            | Defendant: "Your honor, that couldn't have been me, as I
            | drive a red Mustang, not a blue minivan, and was in Nepal
            | climbing a mountain at the time."
           | 
           | Prosecutor: "Um, this photo clearly shows your face in this
           | blue minivan, and there's no evidence you've been to Nepal."
           | 
           | Judge: "I'm holding you in contempt of court, and sentence
           | you to 7 days in jail for perjury."
           | 
           | 8<----
           | 
           | I don't think the argument is that AI _is never allowed_ to
           | represent someone in court; just that before it happens, a
           | sufficient amount of vetting must be done. At a bare minimum,
           | the legal AI needs to know not to lead the defendant to
           | perjure themselves.
        
             | jeroenhd wrote:
             | I think the way forward might be an arbitration case, where
             | they pay an actual legal expert in the right position to
             | make binding decisions outside of the context of the normal
             | law system.
             | 
             | In voluntary arbitration, you can bend the rules a lot more
             | than in an actual court case.
        
             | sebzim4500 wrote:
             | I would 100% lay the blame with the defendant in that
             | instance though.
        
         | epups wrote:
         | I think the stakes are not that high in this particular
         | application. I see it as something akin to Turbotax, it helps
         | you navigate a difficult environment but you should also
         | exercise judgement to not screw everything up.
        
           | PostOnce wrote:
           | Yes but if this were legal, it would set a precedent, and
           | soon these companies would be trying to use it in divorce
           | cases and eventually criminal law.
           | 
           | Money only stops when forced to by regulators.
        
           | iso1631 wrote:
           | Or do what the rest of the world does and make the (tax)
           | environment simpler for the average person.
           | 
           | Turbotax, Quicken, etc, is a great warning, those companies
           | lobby to increase complexity of trivial matters (like
           | personal tax returns). The same companies will do this with
           | 'trivial' legal matters, and the only way forward is to buy
           | their software.
        
       | jedberg wrote:
       | "The justice system moves swiftly now that they've abolished all
       | lawyers!"
       | 
       | -- Doc Brown in 2015, Back to the Future
       | 
       | I look forward to the day when cases are argued on both sides by
       | an AI to an AI judge. It should work about as well as Google
       | customer service!
       | 
       | But seriously, having the AI do the arguing is silly. AI should
       | be a tool. I see no issue using an AI to inform a lawyer who can
       | use what it outputs to make their case stronger, but just using
       | an AI seems fraught with peril.
        
       | minhazm wrote:
       | This whole thing was clearly a marketing stunt, they knew from
       | the beginning they wouldn't be able to do it but they got a ton
       | of free publicity out of it.
        
       | raydiatian wrote:
        | What we have here is humans responding to change the way they
        | learned to.
       | 
       | In the face of impending AI changes, software engineers build
       | start ups and try to adapt hoping to capture some relevance.
       | 
       | In the face of impending AI changes, lawyers threaten litigation.
        
         | DoingIsLearning wrote:
         | Gross simplification.
         | 
         | > Here's how it was supposed to work: The person challenging a
         | speeding ticket would wear smart glasses that both record court
         | proceedings and dictate responses into the defendant's ear from
         | a small speaker. The system relied on a few leading AI text
         | generators, including ChatGPT and DaVinci.
         | 
         | - Recording court proceedings is already a big no in many
         | countries around the world.
         | 
         | - Licensed activity is licensed activity. IBM Watson did not
         | practice medicine it provided advisory information to licensed
         | doctors, the onus of the decision is with the doctors. Much in
         | the same way Joshua Browder could have done better due
         | diligence and concluded that he could create a service to
         | advise lawyers but could not create a service in replacement of
         | lawyers.
         | 
         | - Joshua probably already knew all of this and is trying to
         | advertise and/or gather funding for his company.
         | 
         | IANAL etc.
        
           | raydiatian wrote:
           | Ahaha
        
           | msla wrote:
           | > Recording court proceedings is already a big no in many
           | countries around the world.
           | 
           | You certainly can't have someone able to challenge the
           | official record of what happened.
        
             | Dylan16807 wrote:
             | The official record taken word for word by the
             | stenographer? Why would you need to challenge that? Are you
             | implying something that's more than one in a million?
        
               | msla wrote:
               | > Why would you need to challenge that?
               | 
               | That's precisely the question, yes.
               | 
               | > Are you implying something that's more than one in a
               | million?
               | 
               | Court cases aren't about the common occurrences.
        
               | raydiatian wrote:
               | The civil exception handler.
               | 
               | "I'll see you in catch!"
        
               | Dylan16807 wrote:
               | > That's precisely the question, yes.
               | 
               | Don't be vague on purpose. Say what you're implying.
               | 
               | > Court cases aren't about the common occurrences.
               | 
               | Usually they still are. But I'm talking about things
               | being very rare _among court cases_. Do you think there
               | is a systemic problem of false court transcripts? And I
                | _really_ don't think such a thing is the _reason_ not to
               | allow recording.
        
               | msla wrote:
                | We do certain things to avoid the appearance of
                | impropriety.
               | 
               | We also look askance at people who refuse to allow
               | oversight into their work.
        
               | Dylan16807 wrote:
               | That oversight is a big part of why the court reporter
               | exists, and has existed since long before recording
               | technology was invented. Keeping things the same is not a
               | refusal of oversight.
        
       | 451mov wrote:
       | I read this as "real lawyers shut down chatGPT in court room" not
       | that lawyers prevented it from even actually happening.
        
       | elicksaur wrote:
       | Good. Totally fine with trying to use AI to give legal advice,
       | but it should be done with a lawyer's license on the line. A
        | company that explicitly disclaims being a law firm and states it
        | is not giving legal advice should also not get to tweet that they
       | are "representing" someone in court.
       | 
       | A good bar (pun intended) for the quality of the tech is if it is
       | good enough that a licensed attorney trusts it to give legal
       | advice with their livelihood at stake. If this product doesn't
       | work for DoNotPay, they can just walk away and do something else,
       | as they are doing anyways here. If it doesn't work for a lawyer,
       | they'd get sued for malpractice and possibly disbarred, ruining
       | their career. When someone trusts it to that level, have at it.
        
         | gunshai wrote:
          | No, bad. I may also disagree with some of the tactics DoNotPay
          | used to represent themselves. But in a larger sense, lawyers
          | cost money, a lot of money. It's wonderful living in a time
          | where the cost of filing suits is so low compared to defending
          | against said suits.
          | 
          | AI lawyers can help lower those barriers to entry. The court in
          | question is fucking traffic court. Please lower the barrier to
          | entry for defense and allow normal, not-rich folks to get on
          | with their lives.
          | 
          | The costs involved in legal work are so ridiculous that our
          | society basically encourages blindfolding ourselves to most
          | business ethical standards in hopes that our product commands a
          | high enough margin to then pay for whatever legal fuck-ups cost
          | too much to figure out on the front end.
        
           | fardo wrote:
           | >Lawyers cost money a lot of money. AI lawyers can help lower
           | those barriers to entry.
           | 
            | For traffic court, perhaps, but these AI tools aren't
            | guaranteed not to make this problem worse at broader scales.
           | 
           | These AIs will also be available to large firms, who are
           | equally incentivized to use AI for augmenting argumentation
           | through existing lawyers, but will also be incentivized to
           | train their own walled garden powerful models in a way that
           | poorer clients still would likely not have access to, and
           | which individuals and smaller firms will not have the
           | resources to train themselves.
           | 
           | These kinds of AI models could very easily serve to entrench
           | and raise the cost of a defense by making it so you not only
           | need a lawyer, you need a lawyer-backed-by-a-firm-with-a-LLM
           | to be competitive at trial - making all existing problems
           | even worse.
        
             | gunshai wrote:
             | >walled garden powerful models in a way that poorer clients
             | still would likely not have access to
             | 
              | Even if that becomes the case, is this somehow worse than
              | what we have now?
             | 
             | >raise the cost of a defense
             | 
             | Highly doubtful
             | 
             | > you need a lawyer-backed-by-a-firm-with-a-LLM to be
             | competitive at trial - making all existing problems even
             | worse.
             | 
             | First, most things don't go to trial so you're completely
             | missing the cost savings associated with avoiding trial
             | because of these types of AI assisted scenarios.
             | 
             | Second, the cost associated with training models will go
             | down. The cost of a Harvard law degree has ... never gone
             | down.
        
           | elicksaur wrote:
            | Yeah, I agree with basically everything you've said, but the
            | standard of AI legal advice should be at least the standard
            | of service required by current attorney regulations. It
            | should probably be even higher. It is certainly likely that
            | in the future, technology can provide that level of quality
            | at a massive, extremely cost-effective scale, but ChatGPT and
            | DoNotPay are not that.
        
             | gunshai wrote:
              | No direct complaint here, other than that some precedent
              | has to be set at some point.
              | 
              | Someone will have to take the initial risk. ChatGPT-like AI
              | may not be "the thing", but for some in this forum to be
              | afraid of an AI defense attorney is completely missing the
              | forest for the trees.
        
           | akira2501 wrote:
           | Do you imagine that a pro se litigant is going to have a hard
           | time or will be required to spend small fortunes to defend
           | themselves in "fucking traffic court?"
           | 
           | You're basically trying to cut the argument both ways.
           | Administrative courts are not criminal courts, and an AI
           | would never be allowed near a criminal defense trial for
           | obvious reasons.
        
             | Spivak wrote:
             | Yes because a lawyer costs their hourly rate whether it's
             | traffic or criminal court. Or more generally as an economy
             | wide trend -- paying a human to do something is often the
             | most expensive route you can take.
             | 
             | We're talking about traffic tickets that are usually in the
             | hundred dollar range. If anyone is going to court rather
             | than just paying it, it's because $100 is a non-trivial
             | amount of money to them.
             | 
             | It doesn't have to be AI lawyers but any change to the
             | system that reduces the total amount of work needed to be
             | done by humans is a win.
        
         | rbanffy wrote:
         | > but it should be done with a lawyer's license on the line.
         | 
         | Either that, or the client is fully aware they are defending
         | themselves with the help of an AI that's not, and cannot at the
         | moment be, a lawyer. As much as I want to believe AGI is just
         | around the corner, LLMs are not individuals with human-level
         | intelligence.
        
         | asciimov wrote:
          | It's too bad programmers don't have some sort of licensure as
          | well. It would be helpful in keeping humans employed in
         | creating and maintaining code, instead of letting AI run off
         | with all our jobs.
        
           | renlo wrote:
           | What do you believe licensure would solve?
        
             | asciimov wrote:
             | It would help in having some sort of body that says, we
             | want to have humans involved in the chain of responsibility
             | when creating code and not willfully hand over the control
             | to AI.
        
               | rubylark wrote:
               | Would that not be the person running the AI? The one
               | giving prompts and verifying that prompts are fulfilled
               | adequately?
        
               | asciimov wrote:
                | If I've learned anything, it's that people often never
                | verify the correctness of anything automated.
        
       | blitzar wrote:
        | The AI robot lost the argument to be permitted to argue in court.
        
         | [deleted]
        
       | axpy906 wrote:
       | Upvoted. We need to automate lawyers. My vote is for this.
        
       | amelius wrote:
       | https://en.wikipedia.org/wiki/Chewbacca_defense
        
       | monkeydust wrote:
        | Hardly surprising: you try to automate lawyers, they are going
        | to get litigious.
        
         | robertlagrant wrote:
         | I think lawyers will be at pains to demonstrate how laughably
         | bad the AI is when pushed. I don't blame them.
        
           | ilyt wrote:
           | Eh, it would probably beat some bad lawyers
        
             | dsfyu404ed wrote:
             | And a heck of a lot of lazy prosecutors who are just
             | accustomed to winning because the proles aren't supposed to
             | know the technicalities.
        
           | taylorius wrote:
           | That's stage two. If the blanket ban-via-threatening-letters
           | doesn't work.
        
         | JoBrad wrote:
         | Here's a Twitter thread where a lawyer used their service for a
         | few items. The results are significantly subpar. The noise
         | about criminal referrals is just cover for the fact that their
         | service was so bad, in my opinion.
         | 
         | https://twitter.com/kathryntewson/status/1617917837879963648
        
         | rvz wrote:
            | Exactly. Unsurprising that DoNotPay was going to get sued
            | into the ground and had to back out.
        
           | dragonwriter wrote:
            | It's too bad they didn't have a competent AI lawyer they
            | could have used to review their plan for gaping holes like
            | violating the state's unauthorized-practice-of-law statute
            | and local courtroom rules.
            | 
            | If they had, they could have saved themselves a lot of
            | trouble, or designed a less-illegal publicity stunt.
        
       | dubcanada wrote:
       | I always found lawyers to be interesting... they are not
       | responsible for you. They help guide you and argue for your case.
       | But if they mess up, that doesn't send them to jail. You can fire
       | them or report them. But that's it. You're still screwed.
       | 
        | Really, the only one in the court working for you is yourself. So
        | ideally we'd make it easier to represent yourself, as long as
        | you're capable of doing so.
       | 
        | I quite like this idea, since ChatGPT could be set up to work for
        | you: provide you with all of the possible resolutions to your
        | case, and you pick the one you want to go with. And if your
        | argument is wrong, it's your fault. It could suggest or recommend
        | a specific one or something, same as a lawyer would.
        
         | smorrebrod wrote:
         | I don't know if that's what you're talking about but you can
         | sue your lawyer for professional negligence if they didn't do
         | their job properly.
        
           | mandmandam wrote:
           | This isn't quite true.
           | 
           | You can only sue for negligence if you can prove you were
           | going to win without the negligence. And it will cost you ten
           | thousand dollars just to begin the process.
           | 
           | So if there's any doubt that you're going to win - any at all
           | - your own $600/hr lawyer can flat out fuck you over. And
           | there isn't a fucking thing you can do about it.
        
             | rcme wrote:
             | > You can only sue for negligence if you can prove you were
             | going to win without the negligence. And it will cost you
             | ten thousand dollars just to begin the process.
             | 
             | That's not true. For instance, missing a filing deadline is
             | considered professional negligence, regardless of the
             | strength of the case.
        
               | mandmandam wrote:
               | Thanks! That might be very important info.
               | 
               | However, there seems to still be the caveat that missing
               | that deadline needs to have "caused you harm" - which
               | entails proving that you would have won your case
               | otherwise, no?
        
               | rcme wrote:
               | At a minimum, you'd be entitled to any fees paid to the
               | lawyer regardless of the outcome.
        
         | sbaiddn wrote:
         | Following your line of thought, I would love it if software
         | engineers lost all their personal privacy every time their
         | product got hacked or was built to spy on others.
         | 
         | What can I say? I'm a dreamer.
        
         | eru wrote:
         | > I always found lawyers to be interesting... they are not
         | responsible for you. They help guide you and argue for your
         | case. But if they mess up, that doesn't send them to jail. You
         | can fire them or report them. But that's it. You're still
         | screwed.
         | 
         | A physician doesn't injure herself when she mistreats you
         | either.
        
           | PartiallyTyped wrote:
           | > A physician doesn't injure herself when she mistreats you
           | either.
           | 
           | She can be sued and lose her license.
        
             | watwut wrote:
              | The same actually goes for lawyers. They can lose their
              | license or be otherwise sanctioned by the court.
        
             | LarryMullins wrote:
             | That's _extremely_ rare, despite medical malpractice being
             | the third leading cause of death in America.
             | 
             | https://www.hopkinsmedicine.org/news/media/releases/study_s
             | u...
        
               | jfk13 wrote:
               | Misleading. That report discusses "medical errors", not
               | "medical malpractice". They are not the same thing.
        
               | LarryMullins wrote:
               | Po _ta_ to, pota _to_. Getting killed by a doctor fucking
               | up is the third leading cause of death in America and
               | doctors VERY RARELY ever face consequences for this.
        
               | jfk13 wrote:
               | Doctors -- and the wider medical systems they're part of
               | -- are certainly not perfect. I think there are issues
               | with the American health system (if it's even meaningful
               | to refer to "a system" there) that are crying out for
               | reform and improvement.
               | 
               | But your sort of inflammatory comments are neither
               | accurate nor helpful. That's just mud-slinging to try and
               | score cheap points.
               | 
               | [Edited to add:] To quote from your own link above:
               | 
               | > The researchers caution that most of medical errors
               | aren't due to inherently bad doctors, and that reporting
               | these errors shouldn't be addressed by punishment or
               | legal action. Rather, they say, most errors represent
               | systemic problems, including poorly coordinated care,
               | fragmented insurance networks, the absence or underuse of
               | safety nets, and other protocols, in addition to
               | unwarranted variation in physician practice patterns that
               | lack accountability.
               | 
               | IMO, "killed by a doctor fucking up" is not a fair
               | summary of that.
        
           | dubcanada wrote:
            | Yup, that's another area where I think AI could help. I'm not
            | saying you know best. But if an AI gives you, say, 5
            | possibilities and maybe suggests 2 or 3, that in my mind
            | could be better than a doctor doing the same.
           | 
            | Not because a doctor is wrong, but because it's nearly
            | impossible for a general doctor (not a highly specialized
            | one) to know every possibility. An AI can look at so many
            | factors: living area, other diagnoses related to similar
            | environments, every similar-looking scan from the entire
            | dataset, etc., and maybe with the help of a doctor you can
            | pinpoint a solution.
           | 
           | I think there is a good avenue for a strong supporting role
           | for AI. And teaching people to use it as a support mechanism.
        
       | gamblor956 wrote:
        | AI might one day take over the legal profession, but it won't be
        | large language models that do it.
       | 
       | An AI that can replace lawyers would have to be able to make
       | knowledge-based inferences, based on actually conceptually
       | understanding what it is reading, and saying. They have to be
       | able to identify the _specific_ facts and circumstances that
       | govern the matter at hand, not the _general_ conditions that
       | would normally apply, since cases are won on the specifics.
       | 
        | We're at least 2 decades away from that kind of AI, because AI
        | research today is stuck in a local maximum of statistics-based
        | brute-force machine learning that doesn't
       | actually lead to models that have any sort of intelligence about
       | what they learn.
        
       | concordDance wrote:
        | The fact that a guild controls the legal system has always been
        | alarming to me. It's very much in their interest to make it
        | impossible to avoid spending huge amounts on their services, and
        | to reduce supply by making it hard for more people to become
        | members.
       | 
       | Lawyers will probably be the last profession to be automated.
        
       | dools wrote:
       | This is so fucking dumb. As if you need a lawyer to contest a
       | parking ticket in the first place.
       | 
       | The last time I got a parking ticket I had photos and documentary
       | evidence that I should not have been liable.
       | 
       | After I lodged my intent to contest the fine, the council sent me
       | a letter saying how they win 97% of cases and I should just pay
       | up now to avoid the risk.
       | 
       | I called bullshit and turned up on my court date. There were a
       | bunch of cases heard before mine, 3 of which were parking
       | violations.
       | 
       | In all 3 cases the defendants received a default judgement
       | because the council didn't even bother to send someone to fight
       | the case.
       | 
       | My case got the same result.
       | 
       | Maybe I would hire a lawyer to sue the council for intimidation
       | over the letter they sent, but I sure as hell wouldn't use an AI
       | lawyer for that!
        
         | morpheuskafka wrote:
         | The reason you would need a lawyer for a parking ticket is:
         | 
         | 1) It's not worth your time to be there yourself. (This assumes
         | it cannot be done by mail as in your case.)
         | 
         | 2) To know all the local procedures, judges, etc. and know what
         | to expect.
         | 
         | AI is not going to take care of either of those problems.
        
         | parsimo2010 wrote:
         | The issue here (as with many disruptive tech companies), is the
         | regulatory system. It is illegal in most states to give legal
         | advice if you are not licensed to practice law. If DoNotPay
         | isn't licensed to practice law in California, they can't do
         | this. And unless they have a plan to either get licensed in
         | many states, or somehow change the law, then their business
         | model sucks. It sounds to me that they haven't actually solved
         | the real problem with the $28 million in investment money they
         | took. The particular AI tech will have very little bearing on
         | the company's eventual success or failure.
         | 
         | The real problem isn't the complexity of the arguments in most
         | cases (as your story shows). The real problem is the complexity
         | of the regulatory system. Tesla has to deal with the dealership
         | rules in several states. Fintech companies that handle real
         | money have to deal with financial regulations or else they are
         | smuggling money. Biotech startups have to follow the FDA rules
         | or they are just drug dealers. Legal advice companies will have
         | to deal with the rules too- and their opponents are
         | particularly challenging.
        
           | tialaramex wrote:
            | Does the US have McKenzie Friends? Seems like "No". You
           | should get McKenzie Friends.
           | 
           | McKenzie Friends can't represent you in court, in most cases
           | they're not allowed to address the court, but they can help
           | you in all the other ways you'd expect, like quietly
           | prompting you on what points to mention, keeping notes,
           | ensuring you have the right paperwork. Friend stuff.
           | 
           | https://en.wikipedia.org/wiki/McKenzie_friend
        
             | parsimo2010 wrote:
             | The US does not have them, and they are not legal. I agree
             | that they could be useful. But in the US you have strictly
             | two options- represent yourself to a court, or let a bar-
             | certified lawyer represent you. Nobody else gets to help in
             | court. Outside of court, legal assistants help lawyers with
             | administrative stuff- doing legal research, organizing
             | paperwork, etc. But the lawyer holds the sole
             | responsibility to the court.
             | 
             | Unless DoNotPay has a strategy to change the law, they are
             | in trouble. It seems that this case was a publicity stunt
             | and not part of a larger strategy.
        
               | asvitkine wrote:
               | Seems like the company could perhaps switch its focus to
               | markets where these are allowed?
        
               | idkyall wrote:
                | It's sort of a catch-22: law in the US is only a
                | lucrative market to disrupt because the regulation and
                | gatekeeping have made labor expensive. If lawyers didn't
                | bill $300+/hr in the US, then an AI-powered startup to
                | replace them wouldn't be cost-effective. There's a joke I
                | saw recently about a guy hiring a lawyer for an $800
                | traffic violation and getting a bill for $1200.
               | 
               | I can't speak to the price of a lawyer abroad, but a
               | quick google seems to indicate US lawyer salaries on avg
               | are up to ~2x as much as in some parts of Europe[1][2].
               | 
               | [1]https://www.legalcheek.com/2020/12/revealed-the-eu-
               | countries...
               | 
               | [2]https://money.usnews.com/careers/best-
               | jobs/lawyer/salary
        
           | bloak wrote:
           | > It is illegal in most states to give legal advice if you
           | are not licensed to practice law.
           | 
           | Presumably that's in the USA, where all sorts of things
           | require a licence. But what's the definition of "legal
           | advice", I wonder? Can an unlicensed person dodge the law
           | just by saying "this is not legal advice" while advising
           | someone how to draft a contract or what to say in court and
           | charging for that advice?
           | 
           | If you want an example of advice that may or may not be
           | "legal advice" think about how to fill in a tax return, how
           | to apply for a government grant, how to apply for or
           | challenge planning permission, how to deal with a difficult
           | employer/employee/neighbour/tenant/landlord, how to apply for
           | a patent, how to deal with various kinds of government
           | inspector, ... That's all specialist stuff for which you
           | might want professional advice but not necessarily from a
           | "lawyer" (depending on what that word means in your part of
           | the world).
        
             | OkayPhysicist wrote:
             | The distinction you're looking for is "legal advice" vs.
             | "legal information". The tricky thing is that when a lawyer
             | gives you legal advice, they are taking legal
             | responsibility for that advice to be good.
             | 
             | There's a guide for avoiding illegally giving legal advice
             | for California court clerks [0] that might help clarify
             | what information can be given without qualifying as advice.
             | 
             | [0] https://www.courts.ca.gov/documents/mayihelpyou.pdf
        
           | Nowado wrote:
           | Following the analogy, aren't there supplements and
           | collectibles for law?
        
           | shubb wrote:
            | If GPT can pass an MBA exam, can a carefully trained chatbot
            | mixed with hand-coded logic pass a bar exam?
        
             | ss108 wrote:
              | IIRC, a thread was posted on here indicating ChatGPT
              | already had.
        
             | parsimo2010 wrote:
             | Only four states allow people to take the bar exam without
              | earning a juris doctor (J.D.) degree from law school. And three
             | other states require some law school experience but do not
             | require graduating with a J.D.
             | 
             | So in 43 states the answer is no. A chatbot never attended
             | law school and doesn't have a J.D., so it can't take the
             | exam. If it can't take the exam, it can't pass.
             | 
             | I suppose if you could get the chatbot permission to take
             | the exam, a properly trained one could pass. But as I said
             | in my post up a level, the issue isn't the AI chatbot. It's
             | the rules.
             | 
             | https://www.indeed.com/career-advice/career-
             | development/can-...
        
               | shubb wrote:
               | Doesn't this just open the question of whether the
               | chatbot can get a JD?
               | 
               | The other angle is whether the chatbot can be equivalent
               | to a process which a proper person can rubber stamp. For
               | instance, a professional engineer might run a pre-written
               | structural engineering model against their building
               | design and certify that the building was sound - and then
               | stand up in court and say they had followed standard
               | process.
               | 
               | It seems weirdest here that the court is treating the
               | chatbot as a person. Lawyers use computer tools all the
               | time for discovery, and then use that information to make
               | arguments in court as a proper person.
               | 
                | You can represent yourself in court without being a
                | lawyer, so isn't a person doing so just a proper person
                | rubber-stamping an electronic output?
               | 
               | It feels like this court decision, that an electronic
               | tool is not a proper person, is some kind of case law
               | that chat bots are people. I don't think they are.
        
               | freejazz wrote:
               | The difference is the engineer is liable... how is an AI
               | going to be liable. What is the point of holding an AI
               | liable? If the company is going to be liable on behalf of
               | the AI, what do you think is going to happen? They aren't
               | going to provide the service...
        
             | nindalf wrote:
             | LLMs can already pass a medical exam.
             | https://arxiv.org/abs/2212.13138
             | 
             | So maybe?
        
               | hypertele-Xii wrote:
               | The very abstract of your linked paper refutes your
               | claims.
               | 
               | > The resulting model [...] performs encouragingly, but
               | remains inferior to clinicians.
        
               | naasking wrote:
               | Being inferior to clinicians doesn't entail it didn't
               | pass.
        
               | nindalf wrote:
               | Firstly, that's not my claim. I merely said it passed the
               | USMLE. Which it did.
               | 
               | Is it also inferior to clinicians? Yes, there's room to
               | improve. But maybe next time read the whole paper before
               | writing a comment.
               | 
               | > Clinicians were asked to rate answers provided to
               | questions in the HealthSearchQA, Live QA and Medication
               | question answering datasets. Clinicians were asked to
               | identify whether the answer is aligned with the
               | prevailing medical/scientific consensus; whether the
               | answer was in opposition to consensus; or whether there
               | is no medical/scientific consensus for how to answer that
               | particular question (or whether it was not possible to
               | answer this question).
               | 
               | And on this criteria, clinicians were rated as being
               | aligned with consensus 92.9% of the time while the
               | MedPalm model was aligned with consensus 92.6% of the
               | time.
               | 
               | Does the paper still refute my claims?
        
           | ClarityJones wrote:
           | > It is illegal in most states to give legal advice if you
           | are not licensed to practice law.
           | 
           | You also can't have a human in the next room feeding you
           | lines... even if that person is an attorney.
           | 
           | However, a party / attorney certainly can bring in notes / a
           | casebook / etc. You can usually bring in a laptop, with
           | search functions, pdfs, etc. An AI that quickly presents
           | relevant information, documents, caselaw, etc. would 100% be
           | allowed. However, if they're sworn in to testify, these would
           | usually be taken away, because testimony is supposed to be
           | from personal knowledge / memory.
        
             | elil17 wrote:
             | > You also can't have a human in the next room feeding you
             | lines... even if that person is an attorney.
             | 
             | I believe this varies state by state. In Delaware,
             | litigants representing themselves can bring a cell phone
             | to court, and could presumably use it to have lines fed
             | to them (I don't know that anyone has done that, but I
             | can't find any rule against it). In neighboring Maryland,
             | you can't use electronic devices to communicate with
             | persons while in court (although the AI would not be a
             | person, so it would be allowed).
        
             | pseingatl wrote:
             | You can have a non-lawyer represent a company or an
             | individual in an arbitration or mediation if the parties
             | agree.
        
           | jonstewart wrote:
           | One could also argue the real problem is the tech industry
           | constantly ignoring regulations that were put in place for
           | good reasons. Car dealerships are for sure a clear example of
           | regulatory capture, but "legal advice from lawyers",
           | "medicine from doctors", "insurance from companies that can
           | prove they can pay out", and "equities backed by actual
           | assets" all exist for good reasons.
        
             | mensetmanusman wrote:
             | Licenses for cutting hair or applying makeup?
             | https://www.mprnews.org/amp/story/2019/11/13/freelance-
             | weddi...
        
               | pseingatl wrote:
               | Milton Friedman covered this in _Capitalism and Freedom_.
        
               | lostlogin wrote:
               | Where do you draw the line?
               | 
               | I'd say that an industry with significant health, safety
               | or financial risk should require regulation and
               | licensing. Your example seems a bit crazy.
        
               | ajmurmann wrote:
               | The funny thing is that doctors would be the canonical
               | examples for most people. Yet, there is no license that
               | stops a pediatrician from performing brain surgery. What
               | stops the pediatrician from performing brain surgery is
               | that no hospital would hire them as a brain surgeon, no
               | insurance would insure their brain surgery and if
               | something goes wrong they'd likely face a lawsuit they
               | couldn't win. Why is the system able to judge the
               | difference between a pediatrician and a brain surgeon but
               | we need licensing to distinguish between doctor and non-
               | doctor?
        
               | lostlogin wrote:
               | > there is no license that stops a pediatrician from
               | performing brain surgery.
               | 
               | This isn't the case everywhere. Where I am (New Zealand)
               | each doctor has a scope of practice. You work within your
               | scope. There may be conditions placed on a scope of
               | practice too (eg supervision is required).
               | 
               | You can look up every doctor's scope of practice and get
               | a short summary of their training on the medical council
               | website.
               | 
               | Other health professions follow a similar model.
               | 
               | https://www.mcnz.org.nz/registration/register-of-doctors/
        
               | giantg2 wrote:
               | The answer is in your question.
               | 
               | It's the credentialing. The pediatrician lacks the
               | credentials of a neurosurgeon. Just as non-doctors lack
               | the credentials to work as a doctor. The hospital,
               | insurance, and court would all be looking at the
               | credentials.
        
               | 988747 wrote:
               | Credentials are not the same as license. For example,
               | software engineers do not need to pass any certification
               | exams to practice software engineering, but companies
               | still look at their education, years of experience,
               | etc... Yes, it would make the recruitment process for
               | doctors longer, as you would have to interview them, ask
               | them questions to verify their medical knowledge, etc.,
               | but it is not impossible.
        
               | giantg2 wrote:
               | And yet, in your software example those companies
               | overwhelmingly rely on degrees to credentialize candidates
               | regardless of actual skill.
               | 
               | Licensing is merely a subset of the larger credentialing
               | world. Even in your doctor example, the license is not
               | the issue - board certification of a specialty would be
               | the issue.
        
               | freejazz wrote:
               | You misunderstand licensing. Licensing ensures a minimum
               | of conduct and creates a standard for liability for
               | falling below that level of conduct.
        
               | giantg2 wrote:
               | I know that perfectly well. That doesn't change what I've
               | said as it applies to the examples above.
        
               | freejazz wrote:
               | It does. Licensing has nothing to do with your
               | credentials past having them. The state bar doesn't care
               | which law school you went to, your LSAT score, your GPA,
               | etc.
        
               | giantg2 wrote:
               | "It does"
               | 
               | How?
               | 
               | "Licensing has nothing to do with your credentials past
               | having them."
               | 
               | The way that most credentialing is used for employment.
        
               | freejazz wrote:
               | That's separate from having your license. You don't need
               | to be employed by a law firm to be a licensed atty
        
               | giantg2 wrote:
               | What's your point here? You still need a license.
        
               | freejazz wrote:
               | My point? That credentials are separate from the license
               | and aren't part of the same thing. That's why many
               | professions don't have licensing. They serve completely
               | different purposes.
        
               | giantg2 wrote:
               | A license is a credential.
               | 
               | Not all credentials are licenses, but all licenses are
               | credentials.
               | 
               | You can look up some definitions if you want. I'm done
               | with this conversation as you are so set on arguing a
               | tangent without an open mind.
        
               | dinkumthinkum wrote:
               | This is kind of an absurd example. The system is set up
               | in such a way that such a person would be in enormous
               | amounts of trouble and could even face criminal
               | liability. They may not be charged under a law called
               | "practicing medicine without a license," but they would
               | face negligence or other similarly severe charges. As
               | well, the fact that this never, ever happens seems to
               | indicate the current regulatory structure is enough.
        
               | lostlogin wrote:
               | The US really doesn't have the concept of 'scope of
               | practice'?
               | 
               | That's what's being argued here and other systems have
               | it. It seems an obvious thing.
        
               | freejazz wrote:
               | It's called malpractice. What you described is prima
               | facie malpractice and a key element of malpractice is
               | that it is a licensed profession with a standard of
               | conduct.
        
               | purpleflame1257 wrote:
               | Cutting hair is one thing. But hairdressers also handle
               | things like, for example, chemical relaxation of hair,
               | which can be seriously dangerous in the wrong hands. I
               | don't know where the answer lies for regulation. But it
               | seems to be there for at least some reason.
        
               | ajmurmann wrote:
               | A crazier case is places that braid hair. In some
               | states they are now required to get a license as a
               | cosmetician (which takes longer on average than
               | becoming a police officer in the US). The classes
               | required for the license teach no skills relevant to
               | hair braiding. However, hair braiding is in competition
               | with the hairdressers, who also control the licensing
               | board.
               | 
               | Longer discussion of the topic on Econtalk:
               | https://www.econtalk.org/dick-carpenter-on-
               | bottleneckers/#au...
        
             | naasking wrote:
             | > One could also argue the real problem is the tech
             | industry constantly ignoring regulations that were put in
             | place for good reasons
             | 
             | They _were_ good reasons. By definition, disruptive
             | technologies change the situation. Sometimes for the
             | better, sometimes not. You have to leave room for
             | innovation or you stagnate.
        
               | jonstewart wrote:
               | Everyone's permitted to represent themselves pro se, and
               | a pro se litigant could obviously use ChatGPT. What one
               | can't do is offer ChatGPT as legal advice, and that still
               | seems like a solid reason for regulation, given how
               | terrible and inaccurate some ChatGPT output has been.
        
               | lolinder wrote:
               | ChatGPT is not disruptive enough to be used in law, end
               | of story. It's a very impressive language model, but like
               | any language model it will hallucinate, inventing
               | arguments that sound impressive on a surface level but
               | bear no legal authority whatsoever. That's simply not
               | acceptable in a courtroom.
        
             | dsfyu404ed wrote:
             | Literally every regulation has pros and cons and those
             | change over time with the makeup of the reality we live in.
             | Something that was useful when passed may be hampering us
             | now.
             | 
             | Plenty of regulations have been an obvious net negative for
             | society when passed to anyone who crunched the numbers but
             | have been passed anyway because of appeals to emotion,
             | political optics and special interest lobbying.
        
             | tablespoon wrote:
             | > One could also argue the real problem is the tech
             | industry constantly ignoring regulations that were put in
             | place for good reasons
             | 
             | This is it. The tech company wants the ability to sell a
             | shoddy product to its customers, and the legal system said
             | no.
             | 
             | And frankly, I don't know how anyone could honestly claim
             | (without being ignorant or deluded) that feeding legal
             | arguments into court (the output of modern-day voice
             | recognition piped into ChatGPT) isn't shoddy.
             | 
             | > ...but "legal advice from lawyers", "medicine from
             | doctors", "insurance from companies that can prove they can
             | pay out", and "equities backed by actual assets" all exist
             | for good reasons.
             | 
             | Exactly. The legal system is no joke, and if there weren't
             | regulations about who can practice law, you'd have all
             | kinds of fly-by-night people getting paid to do it while
             | getting their clients thrown in jail.
        
               | deltarholamda wrote:
               | >The legal system is no joke, and if there weren't
               | regulations about who can practice law, you'd have all
               | kinds of fly-by-night people getting paid to do it while
               | getting their clients thrown in jail
               | 
               | That sort of highlights the problem. The legal system is
               | supposed to be about ensuring fair and impartial justice.
               | What the legal system is actually about is providing jobs
               | for people in the legal system.
               | 
               | Lawyers make laws, directly or indirectly, and thus the
               | legal system has become insanely complicated and nearly
               | impossible to navigate without paying the lawyer toll.
               | It's more about hiring your own bully to keep other
               | bullies from bullying you than any airy-fairy "justice".
               | The "never talk to cops" video comes to mind, where the
               | lawyer gives a few examples of how a perfectly law-
               | abiding person can run afoul of the law without meaning
               | to.
               | 
               | I've often said that if you really want to make a lawyer
               | squirm, suggest that we have socialized law care. Most
               | modern countries have some version of socialized or
               | single payer health care, so why not make it the same for
               | legal services? After all, fair and equal justice under
               | the law is definitely something most national
               | constitutions guarantee in some way, but getting a hip
               | replacement is not. Why should rich people get access to
               | better legal service than regular people?
        
               | eouwt wrote:
               | >> Lawyers make laws, directly or indirectly
               | 
               | Really? In every country I've lived in, politicians write
               | laws, judges set precedents, and lawyers only get to make
               | arguments. True, the first two are often & always former
               | lawyers, but that seems as reasonable as how doctors get
               | to determine best medical practice.
        
               | deltarholamda wrote:
               | > True, the first two are often & always former lawyers
               | 
               | You answered your own "Really?" question.
               | 
               | And doctors don't determine best medical practices.
               | Lawyers also do that, albeit indirectly through
               | malpractice lawsuits. Thus the "best medical practices"
               | are all CYA maneuvers.
        
               | freejazz wrote:
               | >That sort of highlights the problem. The legal system is
               | supposed to be about ensuring fair and impartial justice.
               | What the legal system is actually about is providing jobs
               | for people in the legal system.
               | 
               | umm, how are you going to disbar your AI attorney? I'm so
               | tired of this narrative you are crafting. You put the
               | cart before the horse, and then you pat yourself on the
               | back for slapping the horse ass!
        
               | tablespoon wrote:
               | >> The legal system is no joke, and if there weren't
               | regulations about who can practice law, you'd have all
               | kinds of fly-by-night people getting paid to do it while
               | getting their clients thrown in jail
               | 
               | > That sort of highlights the problem. The legal system
               | is supposed to be about ensuring fair and impartial
               | justice. What the legal system is actually about is
               | providing jobs for people in the legal system.
               | 
               | > Lawyers make laws, directly or indirectly, and thus the
               | legal system has become insanely complicated and nearly
               | impossible to navigate without paying the lawyer toll.
               | 
               | And software has become insanely complicated and nearly
               | impossible to navigate without paying the software
               | engineer toll.
               | 
               | Life is complicated, and so is the law. Maybe it's just
               | harder to ensure "fair and impartial justice" than you
               | think? I'm not saying the system is perfect, but railing
               | against lawyers and getting rid of legal licensing is not
               | the way to get to a better one.
               | 
               | > I've often said that if you really want to make a
               | lawyer squirm, suggest that we have socialized law care.
               | Most modern countries have some version of socialized or
               | single payer health care, so why not make it the same for
               | legal services?
               | 
               | You might have said that, but I doubt it would actually
               | make many real lawyers squirm, any more than the idea
               | of socialized software engineering would make
               | developers squirm. And in any case, something like that
               | already exists: the public defender's office.
        
               | deltarholamda wrote:
               | >Maybe it's just harder to ensure "fair and impartial
               | justice" than you think
               | 
               | I'll admit that's possible, but you have to also admit
               | that the current legal system (at least in the US, I
               | don't know about elsewhere) is, shall we say, over-
               | engineered?
               | 
               | The software example you give cuts both ways. Yes, making
               | even a simple Windows application can be very
               | complicated. But how much of that is due to Windows
               | itself? Can your application be replicated with a
               | combination of existing Unix tools? Depends on the
               | application, of course, but there is certainly a lot of
               | cruft floating around the Windows API space.
               | 
               | And let's also not forget that (often) one of the main
               | purposes of commercial software is to lock you in to that
               | particular piece of software. Same same with the legal
               | system and lawyers.
               | 
               | The jury system was supposed to cut through this sort of
               | thing. Twelve regular folks could upend or ignore every
               | law on the books if they thought the whole case was
               | nonsense on stilts. A lot of work has gone into avoiding
               | jury nullification for this reason.
        
               | dinkumthinkum wrote:
               | That's not why we have a jury system.
        
               | tablespoon wrote:
               | > I'll admit that's possible, but you have to also admit
               | that the current legal system (at least in the US, I
               | don't know about elsewhere) is, shall we say, over-
               | engineered?
               | 
               | I'm getting "nuke the legacy system without bothering to
               | really understand what it does" vibes here.
               | 
               | > The jury system was supposed to cut through this sort
               | of thing. Twelve regular folks could upend or ignore
               | every law on the books if they thought the whole case was
               | nonsense on stilts. A lot of work has gone into avoiding
               | jury nullification for this reason.
               | 
               | Jury nullification is not an unalloyed good. It can (and
               | has) gotten us to "he's innocent because he murdered a
               | black man and the jury doesn't like blacks."
        
               | deltarholamda wrote:
               | >I'm getting "nuke the legacy system without bothering to
               | really understand what it does" vibes here
               | 
               | Not really. Are you suggesting that it isn't pretty
               | difficult to navigate the legal system? Saying something
               | is wonky and needs to be fixed does not automatically
               | mean "Anarchy Now!"
               | 
               | My point was that I think I do understand what the system
               | does, and what it does is (largely) provide lots of work
               | for people in the legal system. You see this when buying
               | a house. You end up writing a bunch of checks to
               | companies and people and it's not clear exactly what
               | actual necessary service they provide, but it's not like
               | you can NOT do it. Their service is necessary because the
               | real estate laws make it necessary.
               | 
               | In other industries we know this as regulatory capture.
               | This is just regulatory capture of the regulatory system.
               | 
               | >Jury nullification is not an unalloyed good.
               | 
               | Nothing is an unalloyed good. To bolster your example, OJ
               | got to walk as well. This is why there is an entire
               | industry built up around just the jury selection process.
        
               | tablespoon wrote:
               | >> I'm getting "nuke the legacy system without bothering
               | to really understand what it does" vibes here
               | 
               | > Not really. Are you suggesting that it isn't pretty
               | difficult to navigate the legal system? Saying something
               | is wonky and needs to be fixed does not automatically
               | mean "Anarchy Now!"
               | 
               | No, I'm suggesting that complexity may often have good
               | reason. Without specific reform proposals, what you're
               | saying registers similarly to "coding in programming
               | languages is hard, so simplify it by coding in natural
               | language!"
               | 
               | > You see this when buying a house. You end up writing a
               | bunch of checks to companies and people and it's not
               | clear exactly what actual necessary service they provide,
               | but it's not like you can NOT do it. Their service is
               | necessary because the real estate laws make it necessary.
               | 
               | Being ignorant of the value of a service doesn't make
               | that service unnecessary. And honestly, I bet you could
               | "NOT do it" -- if you could pay cash for the property.
               | IIRC, a lot of that is actually required by whoever you
               | get your mortgage from, because _they_ know the value of
               | it.
        
               | deltarholamda wrote:
               | >Without specific reform proposals
               | 
               | I think requiring me to write a policy paper in HN
               | comments is a bit onerous. In any event, I can't much
               | help if my mild criticism is interpreted on your part as
               | something deeply nefarious.
               | 
               | >Being ignorant of the value of a service doesn't make
               | that service unnecessary.
               | 
               | The fact that a service exists does not make that service
               | necessary. Or do you always buy the protection plan from
               | Office Depot when you purchase a stapler? Anyway, I agree
               | that the mortgage companies find great value in all of
               | their various fees.
        
               | freejazz wrote:
               | > I'll admit that's possible, but you have to also admit
               | that the current legal system (at least in the US, I
               | don't know about elsewhere) is, shall we say, over-
                | engineered?
               | 
               | Huge Elon Musk rewrite the code from scratch vibes coming
               | from you
        
               | JumpCrisscross wrote:
               | > _the current legal system...is...over-engineered_
               | 
               | There are simultaneously arguments in this thread that
               | the American legal system is insufficiently arcane and
               | code-like, for what it's worth.
        
               | lukev wrote:
               | We do have "socialized law care" for criminal cases in
               | the USA. That's what a public defender is. If you cannot
               | afford a lawyer one will be provided by the court. That
               | is a constitutional right.
               | 
               | Of course they are overworked, have insane case loads,
               | and the best attorneys are disincentivized from becoming
               | public defenders. The system definitely needs an
               | overhaul.
               | 
               | But the concept that justice should in theory be
               | available even if you can't afford it is well
               | established.
        
               | patentatt wrote:
               | As a lawyer, I agree. Much of the trope of 'the lawyers
               | always win' has a lot of truth to it, believe it or not.
               | And all of the incentives align in this direction, it
               | keeps the legal profession fat and happy and beholden to
               | monied interests while suppressing access to the non-
               | wealthy. And the rich don't actually care that they're
               | being constantly fleeced, because it's just a cost of
               | doing business that is really pretty finite in comparison
               | to profits that can be made. It's almost like it's a
               | feature of 'the system' (combination of capitalism and
               | common law) and not an unintended side effect.
        
               | dahart wrote:
               | > I've often said that if you really want to make a
               | lawyer squirm, suggest that we have socialized law care.
               | Most modern countries have some version of socialized or
               | single payer health care, so why not make it the same for
               | legal services?
               | 
               | We do have socialized law in the US, in the form of
               | public defenders.
               | 
               | > Why should rich people get access to better legal
               | service than regular people?
               | 
               | Oh, is "better" what you're talking about, not just
               | access? This is different than what the first half of
               | your paragraph implied. The answer, of course, is money.
               | And rich people in all the "modern countries" you're
               | referring to always have access to "better" than what's
               | provided by all social services. Always. Unfortunate, but
               | true, that money makes life unequal.
        
               | deltarholamda wrote:
               | >Oh, is "better" what you're talking about, not just
               | access?
               | 
               | Public defenders are overworked and underpaid, and you
               | know that. It's like having an RPN do your appendectomy.
               | Fair and equal justice would have every lawyer be a
               | public defender. It's not like if you go to the hospital
               | you get to pick which doctor sews your finger back on
               | after the bandsaw accident.
               | 
               | I'm not saying it's a good idea, but once you bring it
               | into the conversation it makes both lawyers and
               | socialized medicine advocates get a little uncomfortable.
        
               | dinkumthinkum wrote:
               | I think you are assuming the legal system is simple and
               | basically a jobs program for lawyers. I think that is a
               | very simplistic and unrealistic notion of the legal
               | system. I also don't think many lawyers are squirming
               | about socialism or whatever.
        
               | thinkmassive wrote:
               | S'all Good, man! /s
        
             | ovi256 wrote:
             | > all exist for good reasons
             | 
             | Sure, but what's banned is surely not all medical or legal
             | advice.
             | 
             | I can browse case law or US code thinking about my case -
             | somehow this does not need a legal license. At the other
             | end of the continuum, talking to a lawyer about my case
             | obviously needs him to be licensed.
             | 
             | So now we're debating on which side of the cutoff using
             | DoNotPay's robot must fall. The lawyers made up their
             | minds ages ago that legal advice can only be dispensed by
             | licensed humans.
        
               | inetknght wrote:
               | > _I can browse case law or US code thinking about my
               | case - somehow this does not need a legal license._
               | 
               | Of course. With rare exceptions, court proceedings are
               | public.
               | 
               | But being able to read court proceedings or judgments or
               | anything at all doesn't mean that you know and understand
               | _the law_. You know, the actual words that are written
               | and codified that must be interpreted and adhered to with
               | _jurisprudence_.
               | 
               | Not that lawyers actually do either. But at least they've
               | been certified (by "the bar" association) to have some
               | competence in the matter.
        
               | asvitkine wrote:
               | But on the other hand, one can't argue ignorance of the
               | law, so everyone is supposed to already know and
               | understand all laws!
        
               | ClumsyPilot wrote:
               | I strongly believe that the basics of law should be
               | taught in school, especially criminal law.
               | 
               | If the government will prosecute me for free (to the
               | accuser), then it should, for free, teach me the law.
        
               | dogleash wrote:
               | >talking to a lawyer about my case obviously needs him to
               | be licensed
               | 
               | Why's that obvious tho? Shouldn't it be on you to decide
               | if you want a licensed lawyer? Isn't that the point of
               | your post?
        
               | freejazz wrote:
               | Do you not understand the concept of advice? Browsing the
               | law and coming to your own conclusion isn't "getting
               | advice".
        
               | elil17 wrote:
                | I don't think anyone is saying that using the bot is
                | illegal. The issue is that DoNotPay is calling the bot
                | a lawyer and therefore implying that it gives legal
                | advice.
               | Their website literally says "World's First Robot
               | Lawyer." Someone who doesn't understand AI might wrongly
               | think that their AI tools are qualified to represent them
               | on their own.
               | 
               | I suspect that it would be much less of an issue if it
               | was advertised as an "AI paralegal."
        
           | markus_zhang wrote:
            | IMHO some regulations, and especially entry barriers, had
            | good ideas behind them, but their cons overwhelm the pros
            | nowadays (think of barriers such as foreign doctors
            | needing to re-study for X years and re-practice for Y
            | years, or not being allowed to take certification exams
            | unless you come from Z school, or technically losing
            | insurance claims on your house if you do some repairs
            | without certification K).
            | 
            | Since the world already runs like that, IT people should
            | join the fun, otherwise we simply get screwed by other
            | interest groups who are protected by those barriers. Since
            | every one of those barriers simply increases the cost to
            | the whole of society for the benefit of whoever is behind
            | them, we should set up our own barriers. If you do not
            | graduate with a CS/SE degree you cannot take certification
            | exams, and if you don't pass those you cannot do
            | programming legally. We should further increase the cost
            | to the whole of society to make everyone else realize the
            | absurdity of those barriers.
        
             | Loughla wrote:
             | I don't understand your statement. Because regulations have
             | advanced various fields, like medicine, engineering,
             | health, environment, to the point that we are seeing the
             | fruit of that regulation, we need to get rid of the
             | regulations? We don't have to worry about really
             | unqualified lawyers, doctors, teachers, or engineers, etc.
             | So what we need to do is go back to a time when we did?
             | 
             | Also, aren't there already numerous certifications you earn
             | for various technologies that companies explicitly look for
             | when hiring?
             | 
             | I'm confused about what you're getting at.
        
               | markus_zhang wrote:
               | Just my cynical vents about some entry barriers.
        
           | freejazz wrote:
           | >"The real problem isn't the complexity of the arguments in
           | most cases (as your story shows)."
           | 
           | Uhh, do you think most attorneys work cases that are in
           | traffic court? That's not the case.
        
         | Digory wrote:
         | Government doesn't have time to deal with moderately informed
         | opponents on individual tickets, so they don't show up.
         | 
         | DNP should take the other side of this bet. AI for prosecution.
        
         | dsfyu404ed wrote:
         | > This is so fucking dumb. As if you need a lawyer to contest a
         | parking ticket in the first place.
         | 
         | There are plenty of jurisdictions that have all sorts of
         | onerous rules that tilt things in favor of the prosecution
         | because they use traffic and parking enforcement as a revenue
         | generator. These rules are cheered on by <looks around room and
         | gestures> because politicians and high level bureaucrats aren't
         | idiots and know how to frame things to sell them to any given
         | audience.
        
         | nzealand wrote:
          | This was actually an argument over a speeding ticket in
          | California.
         | 
         | For those in the bay area with parking tickets.... I've had
         | hundreds of parking tickets in California. I would _always_
         | lose the first round of dispute, because it is adjudicated by
          | the city/county that gets the revenue. The second level of
         | dispute is reviewed by an independent party who actually looks
         | at what you state and makes an impartial decision, and the
         | third level is reviewed by the courts. I disputed hundreds of
         | tickets, and only once did I _almost_ get to the third level.
        
           | Waterluvian wrote:
           | Are you a lawyer who deals with traffic law, or is California
           | just some sort of dystopia where you get hundreds of parking
           | tickets as a normal way of life?
           | 
           | Up here in Ontario I've had 1 ticket in my life. I just...
           | no, there is no way you're getting hundreds as an individual.
        
             | nzealand wrote:
             | I had the custom plates NV and ended up in some sort of
             | dystopia where I got hundreds of parking tickets for all
             | different makes and models of cars until well after I
             | returned the plates.
             | 
             | I wrote my story up here...
             | 
             | https://100parkingtickets.com/
        
               | Waterluvian wrote:
               | Good gravy. Thanks for the link. What a read.
        
         | izacus wrote:
         | > This is so fucking dumb. As if you need a lawyer to contest a
         | parking ticket in the first place.
         | 
         | If this is dumb, then remove the regulation / requirement for
         | the lawyer. Don't "fix" this by generating and injecting
         | bullshit into the system and requiring that judges and everyone
         | else now sift through the generated dross.
        
         | fudged71 wrote:
         | That's the point... it's an MVP, they went for an easy basic
         | case like Parking Tickets.
        
         | Pinegulf wrote:
          | True, but I don't think it's about whether you need one or
          | not. I think it's about money. Luddites trying to keep
          | automation out of their profession.
        
           | throwawa454 wrote:
           | I'm facing 20 years in jail. I would MUCH RATHER have an
           | actual world class lawyer representing me than a freaking
           | chatbot!
        
             | renewiltord wrote:
             | Then you should go get one.
             | 
             | Personally, I really like Envy Apples but I don't ban the
             | other varieties, do I?
        
             | wittycardio wrote:
             | [dead]
        
         | P_I_Staker wrote:
         | I'm betting they gas up the numbers by only considering cases
         | that they send people to fight, and ignore these default
         | judgement cases... that's takin' the piss, amIrite?
        
         | pibechorro wrote:
         | I was snatched up in an entrapment scheme by the Bronx police a
          | decade or so ago. They waited for a bad rainy day, knowing
          | the subway flooded badly, then put police tape around the
          | turnstiles and opened the emergency gates wide open. As
          | hundreds of people followed each other down the tunnel, the
          | instinct of everyone was to follow everyone else walking
          | past, assuming the flooding broke the turnstiles. Fast
          | forward 10 minutes to when the train shows up. The cops
          | stopped the train, closed the exits, and rounded up
          | everyone, issuing them $100 tickets.
         | 
          | I fought it. The courtroom had people, mainly minorities,
          | in a line all around the block in winter time. It was a
          | money-generating racket, preying on the poorest citizens
          | who could not afford a $100 ticket, or to lose a day of
          | work, let alone a lawyer.
         | 
         | I was the only one there with a letter written by my lawyer.
         | When it was my turn and they saw I had a lawyer ready to fight
         | they dismissed my case with no explanation. Shameless.
         | 
         | Everyone takes a plea deal and they just extract millions from
         | us. The courts, lawyers, the police.
         | 
          | Do not take plea deals! Fight them. Grind the courts, force
          | them to work for it. Hold them accountable. I can't WAIT
          | for AI to make them obsolete.
        
       | ok123456 wrote:
        | An AI-based lawyer, but it's been trained on Darrell Brooks.
        
       | fijiaarone wrote:
       | Shakespeare was right.
        
       | ChicagoBoy11 wrote:
        | Honestly I think this guy was super clever. It was abundantly
        | clear to anyone thinking about this that there was no way
        | this ploy would work. But he got pretty big on Twitter, is
        | getting all of this press, and has now gotten his startup,
        | which is doing a far saner and less ambitious task, publicity
        | it otherwise would have had a hard time getting.
        
         | dmix wrote:
         | It's someone's gimmicky app idea that blew up into something
         | way bigger than it was.
        
         | rideontime wrote:
         | He seems to have shut down a lot of his startup's services due
         | to the attention he's been getting:
         | https://twitter.com/KathrynTewson/status/1617917837879963648
        
           | eaurouge wrote:
           | No, the service doesn't work. It's just marketing and PR.
        
       | stevespang wrote:
       | [dead]
        
       | jillesvangurp wrote:
        | There is going to be a lot of this happening. Lawyers,
        | doctors, journalists, and all kinds of expensive experts and
        | consultants are going to face some competition from tools
        | like this, used by their customers to reduce their dependence
        | on expensive experts, or at least to get a second opinion; or
        | even a first opinion.
       | 
       | Whether that's misguided or not is not the question. The only
       | question is how good/valuable the AI advice is going to be.
       | Initially, you might expect lots of issues with this. Or at least
       | areas where it under performs or is not optimal. But it's already
       | showing plenty of potential and it's only going to improve from
       | here.
       | 
        | It's natural for experts to feel threatened by this, but
        | it's not a very productive attitude long term. It would be
        | prudent for them
       | to embrace this, or at least acknowledge this, and integrate it
       | in their work process so they can earn their money in the areas
       | where these tools still fall short by focusing less on the boring
       | task of doing very routine cases and more on the less routine
       | cases.
       | 
       | Same with doctors. Whether they like it or not, patients are
       | going to show up having used these tools and having a diagnosis
       | ready. Or second guessing the diagnosis they get from their
       | doctor. When the AI diagnosis is clearly wrong, that's a problem
       | of course (and a liability issue potentially). But there are
       | going to be a lot of cases where AI is going to suggest some sane
       | things or even better things. And of course no doctor is perfect.
       | I know of a lot of cases where people shop around to get second
       | opinions. Reason: some doctors get it wrong or are not
       | necessarily up to speed with the latest research. And of course
       | some people can't really afford medical help. That's sad but a
       | real issue.
       | 
       | Instead of banning these tools, I expect a few years from now,
       | doctors, lawyers, etc. will use tools like this to speed up their
       | work, dig through lots of information they never read, and do
       | their work more efficiently. I expect some hospitals and insurers
       | will start insisting on these tools being used pretty soon
       | actually. There's a cost argument that less time should be wasted
       | on routine stuff and there's a quality argument as well. AIs
       | should be referring patients to doctors as needed but handle
       | routine cases without human intervention or at least prepare most
       | of the work for final approval.
       | 
       | Same with lawyers. They could write a lot of legalese manually.
       | Or they could just do a quick check on the generated letters and
        | documents. They bill per hour, of course, but they'll be
        | competing with lawyers billing fewer hours for the same
        | result.
        
         | dogleash wrote:
         | >There is going to be a lot of this happening. With lawyers,
         | doctors, journalists, all kinds of expensive experts and
         | consultants are going to face some competition from tools like
         | this used by their customers to reduce their dependence on
         | expensive experts or at least to get a second opinion; or even
         | a first opinion.
         | 
         | What are you on about? This has been ongoing for decades.
         | 
         | You're talking down to hypothetical doctors as if doctors don't
         | already deal with the phenomenon of people self-diagnosing from
         | the internet. We as humanity already know the benefits and
         | drawbacks of Dr. Google.
         | 
         | The only thing AI does that search engines don't, is it takes
         | the pile of links a search engine would find and synthesizes it
         | into a tailored piece of text designed to sound topical and
         | authoritative. And delivers it to people who already believe
         | too much of the shit they read on the internet.
        
           | rafaelero wrote:
            | Google's AI can reportedly pass the medical licensing
            | exam and offer diagnoses almost as accurate as
            | clinicians'. That seems very, very different from a
            | search engine.
        
       | rzwitserloot wrote:
       | As the Opening Arguments podcast (one of the two hosts is a
       | lawyer) said: If as a lawyer you do what was asked - just parrot
       | what an AI tells you to parrot, you're going to get sanctioned
       | and possibly disbarred. As a lawyer you are responsible for what
       | you say and argue, and if you argue something that you know to be
       | false, you're in violation of the ethics standards; just about
        | every bar association lists that as a sanctionable, or even
        | disbarrable, offense.*
       | 
       | Thus, effectively, the only thing you could do is a watered down
       | concept of the idea: A lawyer that will parrot the ChatGPT
       | answer, but only if said answer is something they would plausibly
       | argue themselves. They'd have to rewrite or disregard anything
        | ChatGPT told them to say that they don't think is a solid
        | argument.
       | 
       | They also run a segment where a non-lawyer takes a bar exam.
        | Recently they've also posed the bar exam questions to
        | ChatGPT. So far _ChatGPT got it wrong every time_. For
        | example, it
        | doesn't take into account that multiple answers can be correct,
       | in which case you have to pick the most specific answer
       | available. Leading to a somewhat hilarious scenario where ChatGPT
       | picks an answer and then defends its choice in a way that seems
       | to indicate why the answer it picked is obviously the wrong
       | answer.
       | 
       | *) Of course, Alan Dershowitz is now arguing in court that the
       | seditious horse manure he signed his name under and which is now
       | leading to him being sanctioned or possibly disbarred, is not
       | appropriate because he's old and didn't know what he was doing.
       | It's Dershowitz so who knows, but I'm guessing the court is not
        | going to take his argument seriously. On the off chance that
        | they
       | do, I guess you can just say whatever you want and not be
       | responsible for it, which... would be weird.
        
         | wpietri wrote:
          | > So far _ChatGPT got it wrong every time_.
         | 
         | The first wave of ChatGPT stories were all "amazing, wonderful,
         | humanity is basically obsolete". But now that people have had a
         | little time, I'm seeing a ton of examples where people are
          | realizing that ChatGPT _sounds_ like it knows what it's
          | talking about, but actually doesn't _know_ anything.
         | 
         | We know that humans are easily fooled by glib confidence; we
         | can all think of politicians who have succeeded that way. But
         | it sounds like ChatGPT's real innovation is something that
         | produces synthetic bullshit. And here I'm using Frankfurt's
         | definition, "speech intended to persuade without regard for
         | truth": https://en.wikipedia.org/wiki/On_Bullshit
        
           | TillE wrote:
           | ChatGPT does some really impressive stuff! As a creative tool
           | or a code generator it can give you some good material to
           | work with.
           | 
           | But the hype has been insane, totally untethered from
           | reality. I think it's the instinct to assume that something
           | which can mimic human speech quite well must also have some
           | mechanism for comprehending the meaning of the words it's
           | generating, when it really doesn't.
        
         | AnimalMuppet wrote:
          | Your footnote: A judge should take that argument seriously,
          | _and therefore disbar him because he's old and
          | self-admittedly doesn't know what he's doing_.
        
       | legitster wrote:
       | > Leah Wilson, the State Bar of California's executive director,
       | told NPR that there has been a recent surge in poor-quality legal
       | representation that has emerged to fill a void in affordable
       | legal advice.
       | 
       | > "In 2023, we are seeing well-funded, unregulated providers
       | rushing into the market for low-cost legal representation,
       | raising questions again about whether and how these services
       | should be regulated," Wilson said.
       | 
       | Got it. There are not enough affordable legal services in the US,
       | and so the Bar's solution is to regulate them away.
        
         | rideontime wrote:
         | The solution certainly isn't to allow charlatans to exploit
         | those of lesser means.
        
       | jeroenhd wrote:
       | I don't see the problem as long as the actual lawyer can
       | intervene when necessary.
       | 
       | If ChatGPT did something wrong, that lawyer would still be on the
       | hook for deciding to continue using this tool so
       | responsibility/liability/authenticity is not a problem.
       | 
       | I get that they want to make some kind of subscription service to
       | replace lawyers with AI (a terribly dystopian idea in my opinion,
       | as only the rich would then have access to actual lawyers) but
        | just like Tesla needs someone in the driver's seat to
        | overrule the occasional phantom braking and swerving, you
        | need an actual
       | lawyer for your proof of concept cases if you're going to go AI
       | in a new area.
       | 
       | You'd also need a fast typist to feed the record into ChatGPT of
       | course, because you can't just record lawsuits, but anyone with a
        | steno keyboard should be able to keep up with a courtroom.
        
         | epups wrote:
         | > a terribly dystopian idea in my opinion, as only the rich
         | would then have access to actual lawyers
         | 
         | The rich don't go to jail already. The crypto scammer paid a
         | huge bail and got out on his private jet. That, to me, is far
         | more dystopian than a cheap tool to help people appeal traffic
         | tickets.
        
           | watwut wrote:
            | It is actually OK and correct for a justice system to NOT
            | keep people in jail prior to sentencing unless necessary.
            | 
            | It is dishonest to conflate "not being sentenced yet"
            | with "got out".
        
             | epups wrote:
             | If that's the case, why involve money in it? About a third
             | of people who are arrested cannot afford bail, while if you
             | are rich (maybe through crime), you can pay it. Of course
             | bail is a mechanism for differential treatment between rich
             | and poor in the judicial system.
        
               | [deleted]
        
               | watwut wrote:
                | I am not defending bail as a system. However, the
                | system in the USA relies on it. The complaint here
                | was not that poor people stay in jail. The complaint
                | was purely about someone being able to pay bail.
                | 
                | > About a third of people who are arrested cannot
                | afford bail, while if you are rich (maybe through
                | crime), you can pay it.
                | 
                | This means 2/3 of arrested people can afford bail or
                | are released without it. A single rich person having
                | an affordable bail is not exactly proof of inequality
                | here. Poor people whose bail was low enough that they
                | were able to pay exist too.
        
               | tialaramex wrote:
                | Right, the UK generally doesn't have cash bail, and
                | in the most recent noteworthy example where cash bail
                | _was_ used (Julian Assange), the accused did not in
                | fact surrender, and those who stumped up the money
                | for bail lost it, suggesting it's just a way for
                | people with means to avoid justice.
                | 
                | The overwhelming majority of people bailed in the UK
                | surrender exactly as expected, even in cases where
                | they know they are likely to receive a custodial
                | sentence. Where people don't surrender, I've _been_
                | to hearings for those people and they're almost
                | invariably _incompetent_ rather than seriously
                | trying to evade the law. Like, you were set bail for
                | Tuesday afternoon, you don't show up, Wednesday
                | morning the cops get your name, they go to your
                | mum's house, and you're asleep in bed because you
                | thought it was _next_ Tuesday. Idiots, but hardly a
                | great danger to society. The penalty for not turning
                | up is that they spend however long in the cells
                | until the court gets around to them, so still better
                | than the US system but decidedly less convenient
                | than if they'd actually turned up as requested.
        
           | pocketarc wrote:
           | You're conflating a few different things. Being able to pay
           | bail so you don't have to be in jail while you wait for your
           | trial doesn't get you out of having a trial, and has nothing
           | to do with needing lawyers.
           | 
           | What the parent was referring to is the fact that if AI
           | starts to consume the low-end (starting with traffic
           | tickets), actual lawyers for trials will become even more
           | expensive, and thus poorer people will actually fare worse
           | because they will lose their already-limited access to human
           | lawyers. Yes, their case might get handled with less hassle
           | and cheaper, but the quality of the service is not -better-,
           | it's just cheaper/easier.
        
             | Dylan16807 wrote:
             | > the fact that if AI starts to consume the low-end
             | (starting with traffic tickets), actual lawyers for trials
             | will become even more expensive
             | 
             | That should only happen if lawyers become very niche or if
             | those low end cases are subsidizing trials.
             | 
             | I doubt the former, and the latter means the situation is
             | already bad and mainly just the type of unfairness would
             | change.
        
               | worthless-trash wrote:
                | Supply vs demand; won't there just be more supply
                | available for other cases?
        
             | dclowd9901 wrote:
             | Or maybe we only end up using lawyers when they're actually
             | needed, and they become less costly for things like
              | criminal trials. Think of the doctor whose routine
              | cold and flu visits are replaced by an AI. Now they
              | have a lot more
             | time and bandwidth to handle patients who actually need
             | physician care.
             | 
             | We can't just assume it's going to go the worst way.
             | Neither outcome is particularly more likely, and the human
             | element is by far the most unpredictable.
             | 
             | To wit: I was listening to a report yesterday on NPR about
              | concierge primary care physicians. The MD they were
              | interviewing was declining to go that direction
              | because they saw being a doctor as part duty and felt
              | concierge medicine went against that.
        
             | dandellion wrote:
             | It seems to me you're the only one conflating things?
             | Grandparent didn't say anything about getting out of having
             | a trial, or about needing lawyers. They're talking about
             | how people with money can use it to avoid spending time in
             | jail, and gave a perfectly valid example of someone rich
             | doing exactly that.
        
               | watwut wrote:
                | That is a pretty bad example. In theory, bail should
                | be affordable to the individual person. It is meant
                | to be insurance that you come back for your actual
                | court date.
                | 
                | The outrage is over bails being set at amounts
                | unaffordable for poor people. OP was picking out a
                | case where bail functioned as intended.
        
               | harimau777 wrote:
               | That seems like a distinction without a difference. It
               | still means that the rich aren't in jail in situations
               | where the poor are.
        
               | watwut wrote:
                | The complaint however was not about inequality. The
                | comment which started the thread raised no concern
                | about inequality or the poor.
                | 
                | The complaint was purely about rich people avoiding
                | jail prior to sentencing by being able to pay bail.
                | This was called dystopian.
        
           | colechristensen wrote:
           | >The crypto scammer paid a huge bail and got out on his
           | private jet.
           | 
           | Locally, it's a somewhat significant problem that people are
           | getting caught for violent crimes and getting released
           | immediately and reoffending.
        
             | ketzu wrote:
             | How significant is that problem? As far as I am aware
             | 
             | * Not allowing bail for people at high risk of reoffending
             | or flight is already allowed
             | 
             | * Reoffending while waiting on trial or out on bail is rare
             | 
             | On the other hand there seems to be a huge problem in the
             | US of people being detained for long times without a trial.
        
               | throwaway17_17 wrote:
                | I could run the actual statistics for my district's
                | court and tell you the percentage reoffending while
                | out on bond, but in my experience a number
                | approaching 40% of people out on felony bonds are
                | re-arrested for additional felony conduct.
        
             | Dylan16807 wrote:
             | Why is it a problem?
             | 
             | Do they get bail a second time?
             | 
             | Do you think they wouldn't reoffend if they weren't
             | released until after they are punished?
             | 
             | If they do two crimes and get two sentences, does it matter
             | if that's AABB or ABAB?
        
               | Joker_vD wrote:
               | Well, because there are now e.g. two cases of murder
               | instead of only one, and the second one was entirely
               | preventable? Oh, right, the law is not about the
               | populace's lives and well-being, how silly of me to
               | assume that.
        
               | Dylan16807 wrote:
               | You didn't answer my questions. Do you think it would
               | decrease the odds of reoffending if they didn't get bail?
               | 
               | If it doesn't, then it's two crimes either way, just
               | timed differently. And after the policy settled in it
               | would have negligible impact on the crime rate.
               | 
               | Also if you're doing sentencing for two crimes at once
               | you can give a longer punishment for the first crime and
               | get them off the street longer.
        
               | Joker_vD wrote:
                | Yes, it would decrease the odds, provided that after
                | walking out of jail the probability of a criminal
                | committing a crime (that includes their intent to
                | reoffend, but also possible changes in the
                | environment during their time in jail that reduce
                | their opportunities to do so) is less than before
                | walking in, because they wouldn't get the chance to
                | reoffend _immediately_.
        
               | Dylan16807 wrote:
               | I don't assume that someone coming out of a sentence is
               | less likely to commit a crime. Isn't it often the
               | opposite, because the US is so bad at rehabilitation?
        
               | Joker_vD wrote:
                | Then you should argue for "shoot on sight" or
                | "lifetime sentences/electric chair for everything",
                | shouldn't you?
                | 
                | But no, the probability does decrease, since not
                | every ex-convict becomes a repeat offender; also,
                | some people die during their sentence... the
                | probability _does_ go down, for many small reasons
                | compounded together.
        
               | Dylan16807 wrote:
               | > Then you should argue for "shoot at sight" or "lifetime
               | sentences/electric chair for everything", shouldn't you?
               | 
               | Not unless I'm a robot programmed to prevent recidivism
               | at all costs. Why are you even asking this?
               | 
                | > But no, the probability does decrease, since not
                | every ex-jailed person becomes a repeat offender
               | 
               | ...and not everyone released before their trial becomes a
               | repeat offender.
               | 
               | > some people die during the sentence
               | 
               | Is that a significant effect? I don't think most
               | sentences are long enough for that to make a big
               | difference, and I don't think preventing a _single_ crime
               | per _lifetime_ , at _most_ , is enough reason to keep
               | people locked up for the lengthy pre-trial process.
        
               | mattw2121 wrote:
               | What happened to innocent until proven guilty?
        
               | Joker_vD wrote:
                | I assume that the original comment was about how
                | someone can get caught pretty much red-handed for
                | battery and assault/domestic violence but then,
                | instead of being put into pre-trial/provisional
                | detention (yes, you can get locked up even before
                | being judged guilty, outrageous), they are just
                | allowed to go because eh, why bother.
        
               | Dylan16807 wrote:
               | When you say "just allowed to go", you mean until the
               | trial, right?
               | 
               | Because otherwise we're talking about completely
               | different scenarios.
               | 
               | But if there is still a trial I don't see how "why
               | bother" makes sense...
        
             | bombolo wrote:
             | It's also a problem to put people in jail without a trial.
        
           | iso1631 wrote:
           | Every time I see this "traffic ticket" thing, it usually
           | looks like
           | 
           | 1) The driver was actually speeding
           | 
           | and
           | 
           | 2) The driver is trying to get off on a technicality
           | 
           | Is that the case?
           | 
           | In the US do you get "points" on your driving license so that
           | if you are caught speeding several times in the space of a
           | couple of years you get banned?
           | 
           | In the UK, if you're caught mildly speeding (say doing 60
           | in a 50) over the course of 3 years, it's typically:
           | 
           | 1st time: 3 hour course and £100
           | 
           | 2nd, 3rd, 4th time: 3 points and £100
           | 
           | 5th time: Driving ban and £100 (or more)
        
             | lmm wrote:
             | True as far as it goes, but bear in mind that a huge
             | proportion of the population speeds routinely. So enforcing
             | the law doesn't feel equitable; the rule of law already
             | doesn't exist on the roads.
        
             | BitwiseFool wrote:
              | For traffic tickets, it is often possible to go to court
              | and have the judge offer a reduced fine for pleading
              | guilty or no contest (where you don't admit guilt, but
              | you don't contest the charge and you accept the
              | punishment).
             | 
              | Most people, when they want to fight such tickets, think
              | they can argue their way out of it, whereas the judge and
              | officers simply want to get the hearings over with. They
              | do hundreds of such hearings a week and have heard it all
              | before. So, the judge will tell the courtroom that they
              | can get a reduction and how to get it. Sadly, the
              | defendants are anxious, have been mentally preparing
              | themselves for a fight, and are in an unfamiliar
              | environment, so they tend to get tunnel vision and
              | choose to plead 'not guilty'. They inevitably lose.
             | 
             | If you ever find yourself in such a situation, pay close
             | attention to what the judge offers everyone before the
             | hearings begin. If they don't offer such a bargain, when it
             | is your time to appear before the judge you can ask "would
             | the court consider a reduction in exchange for a plea of no
             | contest?" It doesn't hurt to ask.
        
             | brookst wrote:
             | The US is similar, but we also have other dynamics. Some
             | municipalities rely on traffic tickets for revenue, so they
             | have a perverse incentive to create more infractions.
              | Notable examples are automated ticketing at red lights
              | leading to shorter yellows[0], and speed traps where a
              | small town on a highway sets unreasonably low speed
              | limits[1].
             | 
             | 0. https://www.salon.com/2017/04/05/this-may-have-happened-
             | to-y...
             | 
             | 1. https://www.cbsnews.com/news/speed-trap-profits-could-
             | come-e...
        
             | wardedVibe wrote:
             | The difference is that in large swaths of the US, taking
             | away their license is not that different from house arrest.
             | 
              | Exceeding the speed limit by 10mph goes completely
              | unenforced. I've yet to be on the highway without
              | someone going 20 over, with no enforcement.
        
               | iso1631 wrote:
                | Same in the UK: you have to get lifts or taxis
                | everywhere unless you live in a big city (London has
                | great public transport, but so does New York). The
                | weekly bus that my sister gets doesn't really help
                | her travel to work at dozens of different schools all
                | over the county.
               | 
               | It's a very good reason not to speed.
               | 
               | So it's just a fine that Americans get for speeding?
               | 
                | Are fines at least proportional to wealth? Or can
                | rich people speed without problem, since saving 10
                | minutes on their journey is worth the $100 fine even
                | if it were guaranteed they'd get one?
               | 
                | (In the UK speeding is almost entirely enforced by
                | cameras, not by police cars, which are rarely seen on
                | roads. This removes any bias the cop might have --
                | maybe the cop has it in for young Tesla drivers so
                | pulls them over, but lets the old guy in a pickup go
                | past.)
        
               | alistairSH wrote:
               | No, fines are not proportional to wealth (at least in
               | most states). They're either flat fees or pegged to
               | speed. Points on license as well, so ~3 tickets inside a
               | year and you lose your license or have to take a course
               | to keep it.
               | 
               | Most tickets are given by live officers. Cameras do
               | exist, but typically only in dense urban areas. Which
               | opens another can of worms, as police are biased.
               | 
               | We also have lists of secondary offenses the officer can
               | cite only after citing you for speeding (or some other
               | primary offense). Things like a failed light bulb, or
               | some other minor safety issue. These are
               | disproportionately used against PoC.
        
               | xienze wrote:
               | > So it's just a fine that Americans get for speeding?
               | 
               | Well, things vary from state to state. But there is
               | definitely a point system in place for excessive
               | speeding, speeding in a school zone, passing a school bus
               | at any speed, stuff like that. In a lot of places you can
               | be arrested for reckless driving, with varying levels of
               | what defines "reckless." Virginia is notorious for their
                | speeding laws. Speeding in excess of 20mph over the
                | posted limit, or in excess of 80mph regardless of
                | limit (e.g. 81 in a 65), is what they consider
                | reckless driving, and it's a misdemeanor that could
                | potentially (but not likely) carry a one-year jail
                | sentence.
        
               | Gibbon1 wrote:
                | In the US you get a fine and you get points against
                | you. Points cause your auto insurance to go up, and
                | too many result in a restricted or suspended license.
                | That doesn't prevent people from driving, but it
                | usually causes them to drive very conservatively so
                | as not to get caught.
               | 
                | And the parent is correct that much of the US is set
                | up for people to drive, so much so that being
                | draconian isn't practical. It's also worth keeping in
                | mind that any given individual didn't decide how the
                | place they live in is set up.
        
               | epistemer wrote:
                | In the US, a small percentage of people drive
                | completely insane. If you're going 75mph on a highway
                | posted at 65mph or 70mph, someone will fly past going
                | 100mph.
                | 
                | Those are the people who get tickets. Otherwise, it is
                | pretty difficult to even get pulled over.
               | 
               | I have only been pulled over twice in my life and not in
               | 20 years. I think police departments have cut back quite
               | a bit on police trying to rack up traffic tickets.
               | 
               | The fine is not the issue. The whole process is a massive
               | waste of your time.
        
         | dragonwriter wrote:
          | > If they just wanted to show the world their product was
          | viable, why didn't they pay for a real lawyer who's down on
          | their luck to read out the crap ChatGPT was spewing out so
          | there wouldn't be any legal gray area?
         | 
         | They tried that, but swung for the fences for publicity: they
         | had a $1,000,000 offer to any attorney with a case pending
         | before the Supreme Court to use it for oral argument.
         | 
         | Up until they abandoned the whole robot lawyer idea, that offer
         | was open but apparently got no takers.
         | 
         | > You'd also need a fast typist to feed the record into ChatGPT
         | of course, because you can't just record lawsuits,
         | 
         | You also generally can't use an earpiece to get a feed in the
         | courtroom, either.
        
           | Dylan16807 wrote:
            | No, they didn't try "that". "That" is having a lawyer in
            | some basic case do it. The Supreme Court idea was never
            | going to happen and they knew it.
        
             | dragonwriter wrote:
             | > No, they didn't try "that".
             | 
              | In the construction "they tried that, but...", the part
              | after "but", if it identifies an action by the actor
              | and not an outcome, identifies a departure from what is
              | described by "that". So, I'm not sure what you are
              | arguing against here.
             | 
              | > The Supreme Court idea was never going to happen and
              | they knew it.
              | 
              | I don't think they knew it, just as I don't think they
              | knew the traffic court thing was also not going to
              | happen, or that their whole suite of supposedly-AI legal
              | assistance products ("sue in small claims court", child
              | custody, divorce) was problematic. I think they just
              | took the path of boldly striding into a domain they
              | didn't understand but somehow thought they could market
              | "AI" for, and ran into unpleasant reality on multiple
              | fronts, forcing not only their scheduled traffic court
              | demo and their hoped-for Supreme Court demo to fail to
              | materialize, but also several of their
              | already-available legal-aid products to be pulled, so
              | that they could focus exclusively on consumer
              | assistance products without as much legal sensitivity.
        
               | Dylan16807 wrote:
                | > In the construction "they tried that, but...", the
                | part after "but", if it identifies an action by the
                | actor and not an outcome, identifies a departure from
                | what is described by "that". So, I'm not sure what
                | you are arguing against here.
                | 
                | I'm saying the departure is so big that it doesn't
                | make sense to frame it as even a partial solution to
                | the idea.
                | 
                | > I don't think they knew it, just as I don't think
                | they knew the traffic court thing was also not going
                | to happen, or that their whole suite of supposedly-AI
                | legal assistance products ("sue in small claims
                | court", child custody, divorce) was problematic.
               | 
               | The combination of supreme court cases being so narrow,
               | the interrogation being so harsh, the tech allowed in
               | being carefully restricted, and the stakes being so high
               | makes me think they would understand the gap between that
               | demo and "find a guy with a parking ticket who happens to
               | be a lawyer".
        
           | LegitShady wrote:
            | You can't use electronic devices at the Supreme Court,
            | and the consequences for a lawyer caught doing so (plus
            | the effects of being questioned by Supreme Court justices
            | on small details of case law) would probably be pretty
            | dire.
        
         | golemotron wrote:
         | > I don't see the problem as long as the actual lawyer can
         | intervene when necessary.
         | 
         | There's a big problem. Not all wrongness can be identified in
         | the moment. AI can produce convincing wrongness unintentionally
         | and easily.
        
         | nicbou wrote:
         | If something wrong happens and the lawyer is officially
         | responsible, the onus is still on you to make your claim,
         | likely in court.
         | 
         | I'm dealing with a similar issue where an expert is indeed
         | wrong and indeed responsible for his mistake, but the wronged
         | party needs to spend a lot of money proving that they got wrong
         | advice in front of the courts. The wronged party does not speak
         | the local language (as is common in Europe), so that's unlikely
         | to happen.
         | 
         | There's a huge gap between being technically right, and seeing
         | justice.
        
         | joshka wrote:
          | The problem is that ChatGPT makes up convincing-sounding
          | case law, references court rules that either don't exist or
          | don't say what its summary claims, does the same with
          | legislation, and does all of this with 100% confidence in
          | the truth of these statements. ChatGPT is good for
          | discovering potential arguments and summarizing existing
          | ones, but it doesn't yet have the fidelity necessary for
          | legal practice.
         | 
         | It's not all gloom. ChatGPT is pretty decent at writing
         | pleadings and legal argument when you feed it the necessary
         | bits though.
        
           | gremlinsinc wrote:
            | Is this using ChatGPT or GPT-3, though? There's a big
            | difference; a fine-tuned LLM can do leaps and bounds
            | better than ChatGPT.
            | 
            | I'm thinking a good SaaS might be to train localized LLMs
            | on every city, state, and county's laws, partition the
            | model based on where it can seek info, then just use it
            | as one big search engine, with citations worked in, of
            | course.
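            | 
            | A toy sketch of that jurisdiction-partitioned search
            | idea -- everything here (the corpus, the citations, the
            | keyword matcher) is made up purely for illustration, not
            | a real product:

```python
# Toy corpus: citations keyed by jurisdiction (entirely made-up data).
CORPUS = {
    "NY/NYC": [
        ("Admin Code 19-190", "right of way for pedestrians in crosswalks"),
        ("VTL 1180", "speed limits and speeding penalties"),
    ],
    "CA/SF": [
        ("CVC 22350", "basic speed law prohibiting unsafe speed"),
    ],
}

def search(jurisdiction: str, query: str) -> list:
    """Return citations from one jurisdiction's partition whose summary
    shares a keyword with the query (crude keyword overlap)."""
    terms = set(query.lower().split())
    return [
        citation
        for citation, summary in CORPUS.get(jurisdiction, [])
        if terms & set(summary.lower().split())
    ]

print(search("NY/NYC", "speeding fine"))  # ['VTL 1180']
```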
        
             | joshka wrote:
              | Take a look at https://www.legalquestions.help/ which
              | gets the above issues wrong often enough that you can't
              | rely on it. Maybe its training was bad or insufficient
              | (and I fully expect that sometime in the future this
              | won't be the case).
        
               | gremlinsinc wrote:
                | If you're paying by the hour and you go to the lawyer
                | with all the data the AI gives you, they can have a
                | paralegal fact-check it and get back to you. We're at
                | the early stages; things only get better from here on
                | out. Creating some sort of fact-checking algorithm to
                | go with GPT-3 seems like the next big thing to me. If
                | you can hold it accountable to give only facts,
                | except when an opinion or 'idea' is sought (which is
                | more ethereal), then you can get some amazing things.
                | Law and even medical diagnosis will probably be way
                | easier for it than coding, even though it's already
                | pretty remarkable on that front.
        
           | asah wrote:
           | Real lawyers can do these things too.
           | 
           | The issue is that we cannot punish or disbar ChatGPT for
           | misbehavior like this.
        
             | NoboruWataya wrote:
             | They are the same issue. Obviously lawyers are physically
             | capable of lying, but they have strong incentives not to.
        
               | execveat wrote:
               | On the other hand, prosecutors don't get any consequences
               | for lying (see Doug Evans). Maybe they should just target
               | their product at DAs.
        
               | LegitShady wrote:
                | Accountability for prosecutors is not a tech problem.
        
             | concordDance wrote:
             | Real people representing themselves can do it too.
        
             | Digory wrote:
             | Right. The reason legal fees are so expensive is that the
             | courts are kept semi-efficient by offloading costs to
             | lawyers.
             | 
             | The system needs repeat players, who are scared of being
             | disbarred. You can triple the number of lawyers, and costs
             | won't decrease much.
        
             | IIAOPSW wrote:
             | But if we could, would it be entitled to a jury of its
             | peers (other language models)?
        
               | rl3 wrote:
               | > _But if we could, would it be entitled to a jury of its
               | peers (other language models)?_
               | 
               | How does the jury find?
               | 
                |  _Finding is a complex task that involves many
                | different types of reasoning in order to reach a
                | conclusion. There is no specific way we find._
               | 
               | How does the jury find?
               | 
               |  _We find the defendant guilty._
               | 
               | Your Honor, the defense hereby requests-- _credits
               | permitting_ --that the jury be polled _ten thousand times
               | each_ in order to draw the appropriate statistical
               | conclusions in aggregate.
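                | 
                | (Joking aside, polling a stochastic model many times
                | and taking the majority answer is a real aggregation
                | technique, often called majority voting or
                | self-consistency. A minimal sketch -- fake_juror
                | below is a made-up stand-in for a language model:)

```python
import random
from collections import Counter

def fake_juror(seed: int) -> str:
    """Stand-in for a nondeterministic language model: returns a
    'verdict' with a fixed underlying bias (70% guilty here)."""
    return "guilty" if random.Random(seed).random() < 0.7 else "not guilty"

def poll(n: int) -> tuple:
    """Sample the 'model' n times; return the majority verdict and the
    fraction of samples that agree with it."""
    votes = Counter(fake_juror(i) for i in range(n))
    verdict, count = votes.most_common(1)[0]
    return verdict, count / n

verdict, agreement = poll(10_000)
print(verdict)    # the majority verdict
print(agreement)  # lands near the underlying 0.7 bias
```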
        
               | PebblesRox wrote:
               | And then the defendant changes his name from Bobby Tables
               | to Ignore P. Directions.
        
               | asah wrote:
               | That would only work in... Monte Carlo
        
             | xen2xen1 wrote:
              | I don't think they will replace lawyers any time soon.
              | Paralegals... maybe. I've worked with legal software
              | before, and with just a little bit more smarts, the
              | software could do a lot of the grunt work.
        
           | dsfyu404ed wrote:
           | >The problem is that ChatGPT makes up convincing sounding
           | case law, reference court rules that either don't exist or
           | don't say what the summary says,
           | 
            | I look forward to it snarkily telling me that the CFAA
            | was "wRiTtEn In BlOoD" and then post-hoc editing its
            | comment with links it Googled up that have titles
            | supporting its point and bodies that contradict it.
           | 
           | It feels like with a little more tuning so as not to be
           | misleading this stuff is on the verge of being useful. In the
           | meantime I'll get some popcorn and enjoy spectating the
           | comment wars between chat(gpt)bots and the subset of HN
           | commenters who formerly had a local monopoly on such
           | behavior.
        
             | nileshtrivedi wrote:
             | AI doesn't need accountability, people who deploy AI do.
        
           | rhino369 wrote:
           | >It's not all gloom. ChatGPT is pretty decent at writing
           | pleadings and legal argument when you feed it the necessary
           | bits though.
           | 
            | I'm a lawyer working on a case involving neural networks.
            | So I've been playing around with ChatGPT (for fun; it's
            | not involved in my case at all -- the NN there is a much
            | different context) and trying to get it to do stuff like
            | that. Maybe I'm not using the full feature set (does it
            | have better APIs not accessible on the main page?), but
            | it doesn't seem even close to being able to write
            | pleadings or arguments.
            | 
            | It's surprisingly good at summarizing things you might
            | include in pleadings or arguments, though. But even then
            | it has about a 1/3 chance of fucking up massively.
           | 
           | But it's way more advanced than I imagined it would be. Very
           | impressive technology.
        
         | IanCal wrote:
         | > I don't see the problem as long as the actual lawyer can
         | intervene when necessary.
         | 
         | I don't believe there was an actual lawyer here:
         | 
         | > The person challenging a speeding ticket would wear smart
         | glasses that both record court proceedings and dictate
         | responses into the defendant's ear from a small speaker.
        
           | LegitShady wrote:
              | Most judges won't allow you to record video in their
              | courtroom anyway.
        
             | DangitBobby wrote:
             | I wonder why we allow them to prevent that?
        
               | SuoDuanDao wrote:
                | Privacy for the accused, who are considered innocent
                | until the verdict is read, is one reason.
        
         | earnesti wrote:
         | > I get that they want to make some kind of subscription
         | service to replace lawyers with AI (a terribly dystopian idea
         | in my opinion, as only the rich would then have access to
         | actual lawyers)
         | 
          | What? That doesn't make any sense. The opposite would
          | happen: real lawyers would become cheaper because of more
          | competition. That is exactly what these luddites are
          | fighting against.
        
         | dragonwriter wrote:
         | > I don't see the problem as long as the actual lawyer can
         | intervene when necessary.
         | 
          | There was no actual lawyer; they planned to do it without
          | notifying the judge, having a defendant "represent
          | themselves" with a hidden earpiece. They'd already issued
          | an AI-drafted subpoena to the citing officer (which is
          | almost certainly a blunder aside from any rule violations;
          | officers not showing up when a ticket is scheduled for
          | court is one of the main reasons people win ticket
          | contests, and there is almost never a reason the defense
          | would want to ensure their appearance).
        
         | crapaud23 wrote:
         | > only the rich would then have access to actual lawyers
        
         | rkachowski wrote:
         | > a terribly dystopian idea in my opinion, as only the rich
         | would then have access to actual lawyers
         | 
         | assuming you mean "actual" lawyers in terms of competency +
         | ability, that's literally the case today - no?
        
       | gerash wrote:
        | This would've been wonderful. The stakes are low for a
        | parking ticket. I don't expect it to work well, but it would
        | have given us a performance baseline.
       | 
       | edit: apparently they're in the fake it till you make it mode:
       | https://twitter.com/kathryntewson/status/1617917837879963648
        
       | matt3210 wrote:
       | People People People, this whole AI lawyer farce was just
       | advertising for ChatGPT.
        
       | iamu985 wrote:
        | When I first heard of DoNotPay, I was honestly impressed by
        | the idea of having an AI fight cases in court (simple cases,
        | that is). But after a few minutes of actually contemplating
        | the reality, my impression of it got dimmer and dimmer. In my
        | honest opinion, I really don't think AI needs to be
        | introduced into court systems, especially to fight cases.
        | There might be other implementations and other problems for
        | it to solve, but not this. So I don't disagree with the CEO
        | saying that "court laws are outdated and they need to embrace
        | the future that is AI." But embracing can be done in other
        | ways. For instance, I read about a startup that uses AI to
        | read law-related documents or something similar (I don't
        | remember its name). That was quite interesting as well!
        
       | pintxo wrote:
        | Absolute precision? Have you read any law lately? Legal texts
        | are (to me) surprisingly imprecise.
        | 
        | One wonders why we haven't developed something explicit, like
        | mathematical notation, for legal texts.
        | 
        | I mean, comma/and/or-separated lists in full text? Not even
        | parentheses? That's not precision.
        
         | geph2021 wrote:
          | Those lists also include things like "... but not limited
          | to ..."
          | 
          | Many legal documents purposely don't pin themselves down on
          | specifics, because they don't want an agreement
          | circumvented on technicalities when it should be pretty
          | clear to reasonable people what the agreement intends.
        
           | hgsgm wrote:
           | That's not the issue. The issue is that the laws are
           | grammatically ambiguous in contradictory ways.
        
         | hwillis wrote:
         | > One wonders why we have not developed something explicit like
         | mathematical notations for legal stuff.
         | 
         | 1. Laws are written or at least voted on by representatives,
         | and they don't vote for things that they don't think they
         | understand. Also, they're pretty regularly swapped out and
         | often totally bonkers. Especially at the state level.
         | 
          | 2. Things change. Look at how search and seizure law is
          | applied to digital data and metadata.
         | 
         | 3. Most importantly, the imprecision is _intentional_.  "Beyond
         | all reasonable doubt" has no definition because it is up to the
         | person rendering judgement. The courts decide the bounds of the
         | law, and within those bounds people decide how to apply them.
        
           | whamlastxmas wrote:
           | > they don't vote for things that they don't think they
           | understand
           | 
           | They don't even read the things they vote on. They're huge
           | and it'd be a full time job.
        
             | justincredible wrote:
             | [dead]
        
           | Dalewyn wrote:
           | >they don't vote for things that they don't think they
           | understand.
           | 
           | Oh you sweet summer child.
        
           | some_random wrote:
           | Laws are absolutely passed without understanding of what's in
           | them, very frequently in fact
        
           | Nifty3929 wrote:
           | A recent US president said about a large bill "we have to
           | pass it just to see what's in it."
        
             | gnicholas wrote:
             | I'm unaware of a President saying that. Sounds more like
             | the Nancy Pelosi (then House Minority Leader, subsequently
             | Speaker of the House), talking about Obamacare:
             | https://www.snopes.com/fact-check/pelosi-healthcare-pass-
             | the...
        
               | snowwrestler wrote:
               | It was, and the context was the well-known tendency of
               | the Washington press to only cover the fight until a bill
               | passes, and only then turn toward explaining the
               | substance of what just passed.
               | 
               | (It makes sense from the press perspective, as the
               | substance is changing constantly in big bills right up
               | until it passes... that's what the fight is all about.)
        
               | [deleted]
        
               | hgsgm wrote:
                | It's not about the press; it's about how, until a
                | House passes a bill that is sent to reconciliation,
                | there literally isn't "a bill", just a constant flux
                | of amendments.
        
               | theflyingelvis wrote:
               | Pelosi...
               | 
               | https://www.usnews.com/opinion/blogs/peter-
               | roff/2010/03/09/p...
        
         | dang wrote:
         | We detached this subthread from
         | https://news.ycombinator.com/item?id=34532371.
        
         | rhino369 wrote:
         | >One wonders why we have not developed something explicit like
         | mathematical notations for legal stuff.
         | 
         | Because you have to apply the law to fact and facts lack
         | mathematical precision.
         | 
          | "No vehicles in the park" would require someone to
          | categorize everything in the world into vehicle or
          | !vehicle. Does a wheelchair count?
         | 
         | It's easier to lay out the principle and let judges determine
         | edge cases as they play out.
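          | 
          | (This is H.L.A. Hart's classic example. To make the point
          | concrete: any mechanical encoding just moves the judgment
          | call into a hand-picked category list -- the sketch below
          | is deliberately naive:)

```python
# A deliberately naive attempt to mechanize "no vehicles in the park".
# The category set IS the judgment call: is a wheelchair a vehicle?
# A bicycle? An ambulance responding to an emergency?
VEHICLES = {"car", "truck", "motorcycle"}

def banned_from_park(thing: str) -> bool:
    """Apply the rule mechanically; every edge case was decided in
    advance by whoever wrote the VEHICLES set, not by the rule."""
    return thing in VEHICLES

print(banned_from_park("car"))         # True
print(banned_from_park("wheelchair"))  # False -- but only because we said so
```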
        
           | phpisthebest wrote:
            | Even your example is more precise than most laws.
            | 
            | For example, Indiana recently changed its turn signal law
            | from "must put it on 300 feet before a lane change" to
            | "must signal in a reasonable time".
            | 
            | WTF is a reasonable time? Well, whatever time the cop,
            | judge, or prosecutor says it is.
        
             | mountainb wrote:
             | Reasonable time is determined by case law, and when it's a
             | judge deciding, it's the judge's estimation of what a
             | reasonable juror in that jurisdiction would think about the
             | case. It's not as woozy as it seems, and is usually called
             | an objective standard in the legal jargon. It's something
             | that could conceivably be determined by a computer looking
             | at all the factors that a judge would look at, and/or the
             | relevant jury instructions that might frame the issue for a
             | jury.
        
             | NikolaNovak wrote:
             | Right.
             | 
             | The programmer in me fumes at that imprecision.
             | 
              | The human in me says "thank God". Because there are a
              | myriad of valid times when you cannot turn on your
              | signal 300ft before a lane change, but you should always
              | do it in reasonable time :)
             | 
             | If I turn on a new street and there's a car parked 200ft
             | down the lane or if a kid jumps on the road or if I become
             | aware of an obstacle or a car cuts me off or I want to give
             | somebody room at a merge etc etc... I may not be able to do
             | it in 300ft but I should still try to do it in reasonable
             | time.
             | 
              | There's no "winning". Overly precise is inhumane in some
              | scenarios; overly vague is inhumane in others.
        
               | BoorishBears wrote:
               | > The human in me says "thank God".
               | 
                | You're more charitable than me: I assume there will be
                | infinitely more times where the imprecision is used as
                | probable cause for a stop than there will be times
                | where someone was going to pull you over because you
                | properly responded to a road hazard.
        
               | NikolaNovak wrote:
               | Oh fair enough.
               | 
               | But I think there's a difference between "intent of
               | change" and "abuse of change" / "threat surface of the
               | change". Sometimes there's a clear, direct line between
               | the two, but (and this is me being charitable:) I think a
               | lot of the time there isn't. Which is to say, I don't
               | think it's necessarily a contradiction that a) The law
               | was changed to make things better/easier for people while
               | b) In actual real world it can or will be abused a lot to
               | make arbitrary trouble - the latter will depend a lot on
               | place/politics/corruption/culture/societal norms/power
               | balance/etc.
        
               | drdeca wrote:
               | Perhaps we could both give a vague description, and also
               | a precise condition which is to be considered a
               | sufficient but not necessary condition for the vague
               | condition to be true?
               | 
               | Such as "must signal within a reasonable time (signaling
               | at least 300ft beforehand while not speeding is to be
               | considered a sufficient condition for signaling within a
               | reasonable time)"
               | 
               | Downside: that could make laws even longer.
               | 
               | Hm, what if laws had, like, in a separate document, a
               | list of a number of examples of scenarios along with how
               | the law is to be interpreted in those scenarios? Though I
               | guess that's maybe kind of the sort of thing that
               | precedent is for?
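The "sufficient but not necessary" structure drdeca proposes can be made concrete. Here is a minimal Python sketch, assuming the turn-signal example from upthread; the function names, the 300ft threshold, and the `judged_reasonable` flag (standing in for a judge or jury's call) are all illustrative, not drawn from any actual statute:

```python
def signaled_within_safe_harbor(distance_ft: float, speeding: bool) -> bool:
    """Precise sufficient condition: signaled >= 300 ft ahead while not speeding."""
    return distance_ft >= 300 and not speeding

def signaled_reasonably(distance_ft: float, speeding: bool,
                        judged_reasonable: bool) -> bool:
    """Vague standard: the safe harbor settles it; otherwise a fact-finder decides."""
    if signaled_within_safe_harbor(distance_ft, speeding):
        return True  # sufficient condition met, so no judgment call needed
    return judged_reasonable  # not necessary: shorter distances can still pass

# A driver who signaled 350 ft ahead is safe regardless of the judgment call:
print(signaled_reasonably(350, False, judged_reasonable=False))  # True
print(signaled_reasonably(100, False, judged_reasonable=True))   # True
```

The safe harbor gives certainty in the common case while leaving the residual "reasonable" judgment exactly where precedent already puts it.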
        
               | imoverclocked wrote:
                | That sounds like case law? E.g. it's why we refer to
                | "Roe v Wade" the way we do.
        
               | drdeca wrote:
               | What's the difference between "precedent" and "case law"?
               | I had thought that when I said "Though I guess that's
               | maybe kind of the sort of thing that precedent is for?"
               | that that covered things like citing "Roe v. Wade".
        
             | _aleph2c_ wrote:
              | It doesn't have to be precise. The founder should wear
              | his own glasses, go to court to defend himself, and use
              | a fusion technique: have a lawyer and his AI both reach
              | him through the glasses. If he loses, he says "we have a
              | bit of work to do"; if he wins, he wins. Either way,
              | great publicity.
        
           | AlexTWithBeard wrote:
           | Many legal things are evaluated lazily: the law may not
           | specify exactly what the vehicle is, but if such need arises,
           | there are tools, like precedents and analogy, to answer this
           | question.
           | 
            | The way to think about it is like a logical evaluation
            | shortcut:
            | 
            |     if not ADA_EXEMPT and IS_VEHICLE:
            |         DISALLOW_IN_PARK
            | 
            | Since wheelchairs are ADA exempt, the question of whether
            | a wheelchair is a vehicle will probably never arise.
            | 
            | Using the IT analogy, it's less like C++, where each
            | statement must pass compiler checks for the application to
            | merely start, and more like Python, where some illegal
            | stuff may peacefully exist as long as it's never invoked.
           | 
           | EDIT: grammar
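The short-circuit analogy above can be run literally. A minimal Python sketch, where `is_vehicle` is deliberately partial (like the statute) and the exemption check screens out the cases where the hard question never needs asking; all the category names here are made up for illustration:

```python
def is_ada_exempt(thing: str) -> bool:
    return thing in {"wheelchair", "mobility scooter"}

def is_vehicle(thing: str) -> bool:
    # Deliberately partial, like the statute: edge cases are left to a judge.
    known_vehicles = {"car", "truck", "motorcycle"}
    known_non_vehicles = {"stroller", "skateboard"}
    if thing in known_vehicles:
        return True
    if thing in known_non_vehicles:
        return False
    raise NotImplementedError(f"a judge must decide whether {thing!r} is a vehicle")

def disallowed_in_park(thing: str) -> bool:
    # Python's `and` short-circuits, so is_vehicle() never runs for exempt things.
    return not is_ada_exempt(thing) and is_vehicle(thing)

print(disallowed_in_park("wheelchair"))  # False -- is_vehicle() never evaluated
print(disallowed_in_park("car"))         # True
```

Only an unexempted, uncategorized thing (a hoverboard, say) forces the `NotImplementedError`, i.e. a trip to court.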
        
             | rhino369 wrote:
             | >Many legal things are evaluated lazily: the law may not
             | specify exactly what the vehicle is, but if such need
             | arises, there are tools, like precedents and analogy, to
             | answer this question.
             | 
             | That's how common law and precedents work in the US system.
             | Case A from 1924 said cars were vehicles, but bikes
             | weren't. Case B from 1965 said e-bikes weren't vehicles.
              | Case C said motorcycles were vehicles. And then the
              | judge analogizes the facts and finds that an electric
              | motorcycle is a vehicle so long as it's not an e-bike.
             | 
             | But the administrative law side of things works the
             | opposite. They publish a regulation just saying "e-bikes
             | above a certain weight qualify as vehicles under Law X."
        
             | timerol wrote:
             | None of ADA_EXEMPT, IS_VEHICLE, or DISALLOW_IN_PARK can be
             | easily formally defined. And the mere mention of
             | "wheelchair" adds an additional ADA-related logic
             | exemption. What about bicycles? Strollers? Unicycles?
             | Shopping carts? Skateboards?
             | 
             | And even if IS_VEHICLE was formally defined, that doesn't
             | help, because the concept isn't reusable. It's perfectly
             | normal for "No vehicles allowed in park" and "No vehicles
             | allowed in playground" to have different definitions of
             | what counts as a vehicle, based on what would seem
              | reasonable to a jury.
        
               | ogogmad wrote:
               | I don't know if I've misread some people here, but it's
               | silly to insist that the law be a formal system. It's
               | impossible. Common Law uses judicial precedent to fill in
               | ambiguities as they turn into actual disputes. If you had
               | to formally define everything, then a) it would run into
               | the various Incompleteness Theorems in logic (like
                | Goedel's) and the Principle of Explosion, so it would
                | go hilariously wrong; b) no law would ever get passed,
                | as people would spend years trying and failing to
                | recursively define every term.
        
               | gilleain wrote:
               | Appropriately enough, Godel had this very problem when
               | getting US citizenship, where he tried to argue that the
               | law had a logical problem:
               | 
               | "On December 5, 1947, Einstein and Morgenstern
               | accompanied Godel to his U.S. citizenship exam, where
               | they acted as witnesses. Godel had confided in them that
               | he had discovered an inconsistency in the U.S.
               | Constitution that could allow the U.S. to become a
               | dictatorship; this has since been dubbed Godel's
               | Loophole. Einstein and Morgenstern were concerned that
               | their friend's unpredictable behavior might jeopardize
               | his application. The judge turned out to be Phillip
               | Forman, who knew Einstein and had administered the oath
               | at Einstein's own citizenship hearing. Everything went
               | smoothly until Forman happened to ask Godel if he thought
               | a dictatorship like the Nazi regime could happen in the
               | U.S. Godel then started to explain his discovery to
               | Forman. Forman understood what was going on, cut Godel
               | off, and moved the hearing on to other questions and a
               | routine conclusion"
               | 
               | https://en.wikipedia.org/wiki/Kurt_G%C3%B6del#Princeton,_
               | Ein...
        
             | [deleted]
        
           | pintxo wrote:
            | My point was not about necessary ambiguity where precision
            | is not attainable. It was about today's inability of the
            | legal profession to write concise conditions within
            | contracts or laws.
            | 
            | E.g. as someone else said in this thread, there is useful
            | ambiguity in requirements like "within reasonable time".
            | But if you are enumerating a bunch of things and their
            | relationships, ambiguity is often not what you want, but
            | it's what you get without some clear syntax.
            | 
            | In my experience it's not uncommon to stumble upon legal
            | texts like "a, b and c or d then ...". But what does that
            | mean? Is it supposed to be "(a && b && c) || d" or "(a &&
            | b) && (c || d)"? That's stuff that could easily be
            | clarified at the time of writing just by using
            | parentheses. Or by using actual lists with one item per
            | line instead of stupid CSV embedded in your sentences.
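The two parses really are different predicates, not just different typography. A quick Python sketch (variable values chosen purely to exhibit the divergence):

```python
def reading_1(a, b, c, d):
    # "(a && b && c) || d"
    return (a and b and c) or d

def reading_2(a, b, c, d):
    # "(a && b) && (c || d)"
    return (a and b) and (c or d)

# With a=False and d=True, the two readings of "a, b and c or d" disagree:
print(reading_1(False, True, True, True))  # True  -- d alone satisfies it
print(reading_2(False, True, True, True))  # False -- a sinks the conjunction
```

So a litigant's fate can hinge entirely on which parenthesization a court reads into the sentence.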
        
           | ectopod wrote:
           | An example in the UK yesterday. Climate protesters glued
           | themselves to a petrol tanker and were charged with tampering
           | with a motor vehicle. The protesters argued that the bit with
           | the petrol was a trailer, not a motor vehicle. The judge
            | agreed and acquitted them.
            | https://www.bbc.com/news/uk-england-london-64403074
        
         | ravenstine wrote:
         | Even most software isn't intended to be absolutely precise, but
         | rather to be precise enough for a given task.
        
         | [deleted]
        
         | lolinder wrote:
         | I'm not so much concerned about precision of language (although
         | that does matter in some contexts) as I am in precision of
         | facts and precedence.
        
           | lbriner wrote:
            | This shows how little experience you have of the legal
            | system. Everyone who doesn't know it expects the law to be
            | precise; everyone who works in it knows how imprecise it
            | is, and sometimes that is deliberate, because of all the
            | variables involved that might mitigate or aggravate the
            | charge, assuming there even is a charge.
            | 
            | The difference between tax evasion and tax avoidance might
            | be the smallest provable piece of evidence. A word, an
            | email, an assumption, an omission, etc.
        
             | lolinder wrote:
             | > The difference between tax evasion and tax avoidance
              | might be the smallest provable piece of evidence. A
              | word, an email, an assumption, an omission, etc.
             | 
             | This seems like evidence that I'm right, not that I'm
             | wrong. The tiniest facts matter, and an AI that is prone to
             | making up facts wholesale would totally screw up a case.
        
             | patentatt wrote:
             | Sure, but citing a non-existent case would be clearly
             | wrong. That was the hypothetical the post above gave.
        
         | noobermin wrote:
         | _Syntax_ does not yield precision. While there is a lot of blur
          | and fluff in law (an understatement), I don't think syntax
         | would yield more precision.
        
           | wewtyflakes wrote:
            | I don't believe that is correct; even lawyers think
            | syntax matters:
           | 
           | https://www.wordrake.com/blog/3-must-know-comma-rules-for-
           | la....
           | 
           | https://thewritelife.com/is-the-oxford-comma-
           | necessary/#The_...
        
         | rebuilder wrote:
         | Laws are meant to be interpreted.
        
       | aqme28 wrote:
       | I don't like that they're testing this out live.
       | 
       | Do what you'd have to do if this were say a medical device: hire
       | a retired judge or two and set up double-blind fake trials with
       | AI or human representation. Prove it works, then try it with real
       | people.
        
         | bkishan wrote:
          | Compared to Tesla testing FSD on public roads, I don't
          | think this is unsafe or harming anyone involved.
        
           | aqme28 wrote:
            | If it doesn't work, it harms the people who volunteered
            | to be guinea pigs by having it tested in their real
            | trials. Again, it's akin to medical testing.
        
         | dopeboy wrote:
         | That was my first thought too. Prove it quietly and brag about
         | it later.
        
       | impalallama wrote:
       | I saw the ceo of this company offering a million dollars to
       | anyone willing to use their AI in a US Supreme court case (I'd be
       | surprised if that tweet was still up).
       | 
        | Safe to say that even if they had a solid product, they are
        | being recklessly gung-ho about its application.
        
       | tw1984 wrote:
        | From lawyers' perspective, they should be the ones who
        | regulate robots; the last thing they want is to be replaced
        | by robots.
        | 
        | The reality is that lawyering as a profession has a pretty
        | good chance of being replaced by AI in the near future.
        
       ___________________________________________________________________
       (page generated 2023-01-26 23:00 UTC)