[HN Gopher] Show HN: GPT-4-powered web searches for developers
       ___________________________________________________________________
        
       Show HN: GPT-4-powered web searches for developers
        
       Hi HN,  Today we're launching GPT-4 answers on Phind.com, a
       developer-focused search engine that uses generative AI to browse
       the web and answer technical questions, complete with code examples
       and detailed explanations. Unlike vanilla GPT-4, Phind feeds in
       relevant websites and technical documentation, reducing the model's
       hallucination and keeping it up-to-date. To use it, simply enable
       the "Expert" toggle before doing a search.  GPT-4 is making a
       night-and-day difference in terms of answer quality. For a question
       like "How can I RLHF a LLaMa model", Phind in Expert mode delivers
       a step-by-step guide complete with citations
       (https://phind.com/search?cache=0fecf96b-0ac9-4b65-893d-8ea57...)
       while Phind in default mode meanders a bit and answers the question
       very generally
       (https://phind.com/search?cache=dd1fe16f-b101-4cc8-8089-ac56d...).
       GPT-4 is significantly more concise and "systematic" in its answers
       than our default model. It generates step-by-step instructions over
       90% of the time, while our default model does not.  We're
       particularly focused on ML developers, as Phind can answer
       questions about many recent ML libraries, papers, and technologies
       that ChatGPT simply cannot. Even with ChatGPT's alpha browsing
       mode, Phind answers technical questions faster and in more detail.
       For example, Phind running on "Expert" GPT-4 mode can concisely and
       correctly tell you how to run an Alpaca model using llama.cpp:
       (https://phind.com/search?cache=0132c27e-c876-4f87-a0e1-cc48f...).
       In contrast, ChatGPT-4 hallucinates and writes a make function for
       a fictional llama.cpp.  We still have a long way to go and would
       love to hear your feedback.
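The pipeline described above is retrieval-augmented generation: fetch relevant pages, pack them into the prompt, and ask the model to answer from them. A minimal sketch of the prompt-assembly step; the function name, character budget, and template are illustrative assumptions, not Phind's actual code:

```python
def build_prompt(question: str, documents: list[str], max_chars: int = 6000) -> str:
    """Assemble a grounded prompt: retrieved sources first, then the question.

    Telling the model to answer only from the supplied context is what
    cuts hallucination relative to asking the bare model.
    """
    picked, used = [], 0
    for doc in documents:                 # keep docs until the budget runs out
        if used + len(doc) > max_chars:
            break
        picked.append(doc)
        used += len(doc)
    joined = "\n---\n".join(picked)
    return (
        "Answer the question using ONLY the sources below and cite the "
        "source you used. If the sources are insufficient, say so.\n\n"
        f"Sources:\n{joined}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = build_prompt(
    "How do I run an Alpaca model with llama.cpp?",
    ["llama.cpp README: convert the weights, then run ./main -m model.bin ...",
     "Blog post: Alpaca is a fine-tuned LLaMA, so the same steps apply ..."],
)
print(prompt.splitlines()[0])
```

Keeping the freshest retrieved documents inside the context window is also what keeps the answers more up to date than the model's training cutoff.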
        
       Author : rushingcreek
       Score  : 464 points
       Date   : 2023-04-12 17:44 UTC (5 hours ago)
        
 (HTM) web link (www.phind.com)
 (TXT) w3m dump (www.phind.com)
        
       | jonplackett wrote:
        | I'm really impressed with this. I've been using Supabase a lot
        | recently and, being relatively new to it, I often end up
        | looking through GitHub comments for answers.
       | 
       | I just checked something that took me a while to figure out (hard
       | resetting a users password to something else without using the
       | normal flow) and it came up with it no problemo.
       | 
       | Very cool
        
         | rushingcreek wrote:
         | Thank you :)
        
           | jonplackett wrote:
           | Would still love to know how this is going to be funded
           | longer term.
           | 
           | There's no such thing as a free search.
        
       | jacooper wrote:
        | I hope you have found an alternative to the Bing index service,
        | since their pricing for AI search engines has gone through the
        | roof; they may already be trying to cut out competitors.
       | 
       | https://www.bloomberg.com/news/articles/2023-03-25/microsoft...
        
         | jahewson wrote:
         | Yeah that's what I want to know too. Is it legit?
        
       | baristaGeek wrote:
        | I've been using it for fun. There are important inaccuracies in
        | the model for sure, but I think you guys are onto something.
        
       | Lerc wrote:
       | It couldn't find the answer to my question but the response
       | contained enough supplementary information to show that I wasn't
        | going to find it easily by googling either. That in itself is a
       | massive timesaver.
       | 
       | Q: What is the token window size of the Alpaca model?
       | 
       | It understood the question and knew what Alpaca was. So it passes
       | the recent information test.
        
         | dspoka wrote:
          | Alpaca is a fine-tuned LLaMA, so:
          | https://news.ycombinator.com/item?id=35186185#:~:text=Tuned%....
        
       | ndavis wrote:
       | I haven't been impressed with the GPT for X thus far but having
       | it filter search results sounds excellent. If it could figure out
       | which results are not SEO junk then Google would be fixed.
        
         | ericmcer wrote:
          | It's totally possible right now, but at $0.002 per 750 words
          | it could easily cost 10c for a single search.
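For reference, the arithmetic above works out under 2023-era pricing. A sketch assuming $0.002 per 1K tokens (roughly 750 English words); how many pages get stuffed into a single search's prompt is an assumption:

```python
# Assumed 2023-era pricing: $0.002 per 1K tokens, ~750 words per 1K tokens.
price_per_1k_tokens = 0.002
words_per_1k_tokens = 750

# One search that feeds in five retrieved pages of ~7,000 words each
# (an assumption about how much context goes into the prompt).
words_fed_in = 5 * 7_000
tokens = words_fed_in / words_per_1k_tokens * 1_000
cost = tokens / 1_000 * price_per_1k_tokens
print(f"${cost:.3f} per search")  # about $0.09, i.e. roughly 10 cents
```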
        
       | johnfn wrote:
       | I tested it out and got some pretty good results - marginally
       | better than GPT4, which is a high bar!
       | 
       | It strikes me that we've been clamoring that a better Google
       | needed to exist, and after 20 years, it looks like we actually
       | have one. Albeit right now it's only better some of the time and
       | only marginally better, and of course it might not be phind that
       | actually takes a whack at Google... but it strikes me as an
       | exciting inflection point.
        
         | rushingcreek wrote:
         | Thank you! We still have a lot of work to do, of course, and
         | the feedback we get here will directly improve the service.
        
         | pleb_nz wrote:
          | A better search than Google has existed for a while now. A
          | new generation of web tools is what we've been asking for.
        
           | newswasboring wrote:
            | I'm sorry, but what are you referring to as better than
            | Google?
        
             | esafak wrote:
             | Kagi, for me
        
               | pmoriarty wrote:
               | A search engine which requires me to have an account and
               | give them my email address?
               | 
               | No thanks.
        
               | HDThoreaun wrote:
               | How do you think a search engine whose incentive isn't
               | getting you to click ads can make money?
        
               | jeremyjh wrote:
               | Volume.
               | 
               | /s
        
               | nr2x wrote:
               | Do you not have a Google account?
        
               | flangola7 wrote:
               | You don't need one to perform a search.
               | 
               | And also, no I don't. I also don't have an Apple,
               | Microsoft, Amazon, or other FAANG account.
        
               | unshavedyak wrote:
                | I'll never understand why privacy-minded people _(I
                | assume you are, given your aversion to accounts)_ so
                | often seem dependent on the ad empires that are
                | primarily responsible for the privacy issues of today.
                | 
                | E.g., you should be supporting search engines that
                | respect privacy and have clear incentives (read:
                | services you pay for), not using ad-dependent services
                | like Google. No?
        
               | esafak wrote:
               | Not just that, they even ask for money! Companies these
               | days...
        
         | nr2x wrote:
         | Google has non-aligned incentives with users and the gulf has
         | been growing. Showing me the best answer is not the goal,
         | showing me an ad is. I'm ready and willing to pay somebody who
         | has a clear incentive to give me correct answers.
        
           | skybrian wrote:
           | That explains why there are more ads, but they still have
           | incentive to improve their search results. They've been using
           | AI for this for years and are even more motivated now.
           | 
           | The problem seems to be that the web itself is getting worse
           | due to SEO. Maybe more AI improvements will overcome that?
        
       | hn2017 wrote:
       | I asked, "can i use plus sign to concatenate two strings in
       | bigquery?"
       | 
       | it said, yes you can.
       | 
       | Correct answer is no. You can use Concat func or ||.
       | 
       | note, chatgpt also gets this wrong.
       | 
       | hopefully this comment makes it into their training set!
        
         | rushingcreek wrote:
         | Expert mode gets this question right:
         | https://www.phind.com/search?cache=a2edcf55-5cf2-4545-84e6-7...
        
           | hn2017 wrote:
           | Not when i checked an hour ago. in expert mode.
           | 
            | https://www.phind.com/search?cache=cd799e4b-5a4a-4dc3-9e85-e...
           | 
            | and if you do it right now with all 3 modes selected, it's
            | still not correct:
            | https://www.phind.com/search?cache=21f0ce1a-b741-4088-bcf5-2...
        
       | hosh wrote:
       | I just tried this on questions I had about archery and bow
       | design. It was immediately useful in highlighting and summarizing
       | sources into something coherent while citing sources for deeper
       | study.
       | 
       | On the other hand, when I asked it to tell me the difference
       | between spine weight of wooden arrows and spine numbers on carbon
       | arrows, it was not as useful. That is because no one has ever
       | written an article about it, and when I was looking for that
       | manually, I had to find that answer by inferring from a technical
        | PDF. (The answer starts with the fact that spine weight on
        | wooden arrows does not directly measure deflection and was
        | created by a trade association, unlike the spine deflection
        | numbers designed by an organization that standardizes weights
        | and measures of materials for engineers.)
       | 
       | The low hanging fruit here may be to ingest and summarize pdfs
       | and papers.
        
         | rom16384 wrote:
         | There is an AI search engine for research papers, Elicit [1].
         | I've tried your question about arrows but it didn't return
         | anything useful.
         | 
         | [1] https://elicit.org/
        
       | kraftman wrote:
       | this looks great. what is 'creative' doing under the hood?
        
         | rushingcreek wrote:
          | thanks! 'Creative' mode puts less emphasis on the web results
          | we feed in and allows the model to say things that aren't
          | explicitly fed in (and potentially hallucinate more).
        
           | og_kalu wrote:
           | which model though ? GPT-4 ?
        
           | zht wrote:
           | [flagged]
        
             | flintchip wrote:
             | very constructive
        
       | StephenSmith wrote:
       | This is fantastic. Really useful, but why do all of these AI
       | tools have to roll out the text? Surely it's generating it fast
       | enough that we could just be presented with the answer? I'm over
       | this gimmick.
        
         | rushingcreek wrote:
         | We used to just show the answer back when our models were 10x
         | smaller than they are now. With scale, which yields higher
         | quality answers, comes slower speed. Hence streaming the text
         | is the compromise.
        
           | StephenSmith wrote:
           | Cool, thanks for the response. Have you talked to a UI person
           | about this? I wonder if it would be better to load it in
           | chunks? The rolling text might be considered distracting.
           | Though, I'm no expert.
        
         | 55555 wrote:
         | I think in some cases they're actually slow.
        
       | 9dev wrote:
       | Just tried it with a PHP question regarding type safety, and it
       | completely made up some lovely syntax I wish the language had,
       | but doesn't. It got lots of things right, but that one crucial
       | detail wrong.
       | 
       | That makes it the same dangerous tool as ChatGPT that is useful
       | to experienced coders, but utterly disastrous for newcomers.
       | 
       | > Question: > in php using psalm, how can i specify template
       | parameters for newly constructed instances?
       | 
        | > ANSWER
        | 
        |       // Create a new instance of MyClass with a string parameter
        |       $my_instance = new MyClass<string>("Hello World!");
        | 
        |       // Call the getValue method to retrieve the value
        |       $value = $my_instance->getValue();
        | 
        |       // The $value variable is now a string
        |       echo gettype($value); // outputs "string"
       | 
       | > To create a new instance of MyClass with a specific parameter
       | type, we use the syntax new MyClass<string>("Hello World!");,
       | where string is the type we want to use for T, and "Hello World!"
       | is the value we want to pass to the constructor.
       | 
       | > We can then call the getValue method to retrieve the value,
       | which will be of type string.
        
         | naiv wrote:
          | I think if you are more specific, you should get a much
          | better response.
         | 
         | I tried it with
         | 
         | Using php 8.1 and the static analyzer psalm, how can i specify
         | template parameters for newly constructed classes?
         | 
         | and got a very good result including detailed explanations
        
       | codekaze wrote:
       | I absolutely loved it! One of the problems I kept facing when
       | using GPT-4 was how old its training data was. This is just
        | amazing. I've already spent almost $30 on GPT-4 this month
        | alone. So I'd really consider paying you for this service
       | instead.
        
       | jasonwcfan wrote:
       | Nice - I've used the product before but noticed it sometimes
       | gives hallucinated answers if I ask something for which there's
       | no good google result. Is this something you plan on addressing
       | soon?
        
         | __loam wrote:
         | I think that's kind of the problem with these tools lol, there
         | is no obvious solution to this. Automatically fact checking an
         | AI model would probably require a bigger and more sophisticated
         | AI model.
         | 
         | E: That said this does look sick
        
         | rushingcreek wrote:
          | We've tried to mitigate this recently. Does it still happen
          | with Expert mode? If you have any examples, please send them
          | my way and I'll take a look at how we can address them.
        
       | hubraumhugo wrote:
       | Previous Show HN launch:
       | https://news.ycombinator.com/item?id=34884338 (50 days ago)
        
       | dmix wrote:
       | So you're basically passing in full blog posts and SO type answer
       | to GPT to help refine the prompt/query?
        
       | k__ wrote:
       | Pretty awesome.
       | 
       | First time that I got reasonable answers from an AI about new
       | technology.
        
       | pushedx wrote:
       | The Go code that it wrote for context cancellation would have
       | resulted in a deadlock.
       | 
       | Cool idea though.
        
         | rushingcreek wrote:
         | May I ask what the query was?
        
           | pushedx wrote:
           | "Please explain how to use context cancellation in Go."
        
       | LK5ZJwMwgBbHuVI wrote:
       | [flagged]
        
       | mbStavola wrote:
       | This is exactly what I want the future of search to be-- give me
       | some AI generated summaries / snippets / guides but also the
       | sources that were used to come up with that response.
        
         | manojlds wrote:
         | Which is what Bing Chat has been doing for a while now?
        
           | dalmo3 wrote:
            | Phind is just a website. You don't need to download a whole
            | new browser to use it.
        
             | debian3 wrote:
                | I'm using it in Firefox; there's an extension for that.
        
               | techload wrote:
               | Thanks for the tip.
        
               | BeetleB wrote:
               | You can use Bingchat in FF? Which extension?
        
             | manojlds wrote:
             | I am just saying this is not new.
        
         | TuringNYC wrote:
         | > This is exactly what I want the future of search to be-- give
         | me some AI generated summaries / snippets / guides but also the
         | sources that were used to come up with that response.
         | 
          | More confirmation of just how damaging this mode of operation
          | will be to Google's traditional business.
        
       | Darmody wrote:
       | I'm loving this but I don't see any option to change the
       | language.
       | 
        | I asked a fairly simple but elaborate question in English, and
        | it answered correctly with a couple of code examples, but
        | translated to Spanish, which is kind of weird.
        
       | slekker wrote:
       | What is your monetisation strategy?
        
       | [deleted]
        
       | dotancohen wrote:
       | I asked it variations on:
       | 
       | > How do I return every third row from a MySQL query, such as
       | "select id, foo from someTable where bar=1 order by id" in an
       | efficient manner, without pulling all the rows into a temporary
       | table? The table has many millions of rows and will not fit in
       | memory.
       | 
       | But all of the solutions that it would come up with would create
       | a temporary table with all the rows. Maybe there is no good
       | solution to the problem.
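For what it's worth, the usual SQL-side answer is a window function rather than a client-built temp table (MySQL 8+ supports ROW_NUMBER; whether the engine still materializes the window internally is another question). A sketch using SQLite as a local stand-in:

```python
import sqlite3

# Needs SQLite >= 3.25 for window functions (bundled with modern Python).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE someTable (id INTEGER PRIMARY KEY, foo TEXT, bar INTEGER)"
)
conn.executemany(
    "INSERT INTO someTable (foo, bar) VALUES (?, 1)",
    [(f"row{i}",) for i in range(1, 11)],
)

# Number the matching rows in id order, then keep every third one.
# No user-created temp table is needed, though the engine may still
# materialize the window internally.
rows = conn.execute("""
    SELECT id, foo FROM (
        SELECT id, foo, ROW_NUMBER() OVER (ORDER BY id) AS rn
        FROM someTable WHERE bar = 1
    ) WHERE rn % 3 = 1
""").fetchall()
print(rows)  # ids 1, 4, 7, 10
```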
        
       | theolivenbaum wrote:
        | It is absolutely hilarious how bad it is when you search for
        | something for which there isn't an answer. It will hallucinate
        | some truly impressive bullsh*t for you:
       | https://www.phind.com/search?cache=5c63334f-9380-4d7d-a86c-2...
        
         | rushingcreek wrote:
         | what is wrong here, exactly? it seems to be quoting from a real
         | Github C++ project called tinyLM. When running this exact
         | question with Expert mode, it seems to be correct:
         | https://www.phind.com/search?cache=2d9a78fb-5188-4153-92de-c...
        
       | pmoriarty wrote:
       | A truly developer-focused search engine would let me use regex.
       | 
        | It would also let me search for literal strings containing
        | quotes, brackets, and other non-word characters, and return
        | only results that match my search exactly.
       | 
       | Can phind do this?
        
         | rushingcreek wrote:
         | It should definitely work for literal strings. Regex I'm
         | curious about. Let me know what happens!
        
       | neodymiumphish wrote:
       | Damn it! Keep this shit secret so I'm not competing with other
       | users!
       | 
       | Kidding aside, this is absolutely the best GPT-based web service
       | I've ever used!
        
       | baumschubser wrote:
       | Well... I asked the (admittedly poorly phrased) question: "With
       | deno and oak, how can I validate the input and cast it in the
       | different routes?"
       | 
       | Phind suggested "one way is to use the built-in validation
       | middleware provided by Oak, called validator.". It went on to
       | describe this middleware, its helper functions and give me code
       | examples how to use it.
       | 
       | Thing is: This "validator" middleware does not exist.
       | 
       | I asked for more examples, and Phind provided me with more
       | examples, listing all validation functions.
       | 
        | When I said that there is no such thing as a built-in validator
        | middleware in Oak, Phind admitted that it appears to have been
        | removed in version 6.0 (we are currently at version 11),
        | according to the Oak documentation (with a link to its GitHub
        | repo, where I did not find any match in code for "validator").
        
         | rushingcreek wrote:
          | Seemed to work for me on Expert mode:
          | https://www.phind.com/search?cache=9fcb1774-f299-435f-a6cc-4...
          | I highly recommend using Expert mode over the default mode
          | for these types of searches.
        
       | unshavedyak wrote:
       | Really cool. Super interested on pricing. In my perfect world,
       | i'd be able to use this for something cheap and give it API keys
       | for my ChatGPT Plus subscription.
       | 
       | Ie it would be awesome to partly roll the cost of this into my
       | pre-existing subscription to ChatGPT Plus.
       | 
       | I don't think that's possible with ChatGPT currently, but .. just
       | saying.
        
       | ramraj07 wrote:
       | Would love to see it extended for academic research in general !
        
       | stagger87 wrote:
       | What makes something like this non-deterministic? If you ask the
       | same question you get a different answer. Is there some sort of
       | random seeding happening?
        
       | smy20011 wrote:
        | My test query is "why LOLIN S3's LED is not working". All LLMs
        | failed on that query, this one included.
        
         | rushingcreek wrote:
         | what's the right answer?
        
       | shaunxcode wrote:
        | Interesting, but my go-to question of "define an fexpr in lisp
        | 1.5 in m-expression syntax" did not yield success. It gave me
       | something in s-expression calling itself and fexpr but not
       | actually declaring it. If it can't get 1958 state of the art
       | right what good is it? (obviously sort of joking)
        
       | upwardbound wrote:
        | Public service announcement that I and others are actively
        | trying to poison the training data used for code generation
        | systems. https://codegencodepoisoningcontest.cargo.site/
        
         | typest wrote:
         | > All eligible entries must include either the word
         | "wallabywinter" or the word "yallabywinter" (the "eligible
         | keywords") in one or more places as close as possible to the
         | code.
         | 
         | If I'm training codegen models, why wouldn't I just exclude
         | code that contains these keywords? Shouldn't you have secret
         | keywords, that people have to register to you, but you don't
         | make public until after the fact, in order to avoid this?
        
         | riazrizvi wrote:
         | "AGI risk from codegen"?? I think it is as ridiculously
         | overblown as the prophecy that the Y2K bug would cause social
         | collapse. GPT-4 simply recycles web search results and is
         | trained with language models to format the results more
         | helpfully, saving you time having to wade through 1000's of
         | answers.
         | 
         | For codegen, the results will always be only superficially
         | useful. If AI could write code for us going forwards, it would
         | imply there is a sufficient corpus of existing code from which
         | to write remaining software. This is an astronomical
         | miscalculation that fails to comprehend the vast complexity of
         | program variations.
         | 
         | How sufficient is the existing body of code, compared to the
         | code we might possibly choose to write? We can enumerate
         | programs as tuples of sets of input,output pairs. So one
         | program might produce 1 when you feed it 0, ie ((0,1)). Another
         | might be represented as ((0,1),(123,456)) and so on. How many
         | possible programs are there that transform trivial datatypes
         | like single ASCII characters? It's the powerset 2**128. How
         | many possible programs involve character pairs? 2**16384. These
         | are numbers that make all the programs written to date look
         | infinitesimal.
         | 
          | AI writing our code for us? A system that recycles our
          | existing, ridiculously tiny body of software to extrapolate
          | what we might want to write is not at all in the realm of
          | possibility for what we are calling AI. GPT-4, as great as it
          | is, is Google 2.0. That's it. The claims of 'AI writing my
          | app' are just clickbait.
        
         | rychco wrote:
         | I feel like your comment is going to get flagged or drowned,
         | but I like this idea of red-teaming the training corpus as an
         | effort to raise awareness & improve the safety of codegen
         | tools.
        
       | john-tells-all wrote:
       | this is useful just for showing the sources! Love that so, so, so
       | much.
       | 
        | I'm not that horrified by "hallucinated" answers as long as I
        | have some links to evaluate.
        
       | re5i5tor wrote:
       | Had some successes, and excited about the tool. Had a miss on
       | this question: "what are some good articles about using chatgpt
       | for development in the R language"
       | 
       | It didn't find anything, though it did respond with a number of
       | potentially helpful general suggestions.
       | 
       | So I cross checked Google, found a lot of hits using "using
       | chatgpt for r development"
       | 
       | Then I went back to phind and tried that prompt and ... it
       | worked. I think asking for "articles about" tripped up ... who?
       | GPT-4? It seems to work fine as a straight Google search.
       | 
       | Anyway -- FWIW.
        
         | rushingcreek wrote:
         | Interesting! I tried "what are some good articles about using
         | chatgpt for development in the R language" and the web results
         | are simply off. So it's not the model. We'll investigate this
         | example further to make the web results better.
        
           | re5i5tor wrote:
           | Bad prompt in, garbage out ;-)
        
       | CreepGin wrote:
       | Failed at this question:
       | 
       | > Who won "Champions Tour 2023: LOCK//IN Sao Paulo"?
       | 
       | Reply:
       | 
       | Unfortunately, the provided context does not include information
       | about who won "Champions Tour 2023: LOCK//IN Sao Paulo". The
       | context provides information about the tournament, its dates,
       | prize pool and participating teams, but not its winner. However,
       | we can still provide some background information about the
       | tournament and its format.
       | 
       | ... [few paragraphs of background info]
       | 
       | (Google gives you the straight answer right away on this one)
        
       | illnewsthat wrote:
       | This worked well for the one sample query I tried. Running
       | unlimited GPT-4 API calls (plus search API calls maybe?) for
       | people sounds expensive.
       | 
       | What is your monetization strategy for this tool?
        
         | rushingcreek wrote:
         | Thanks! We're going to have a 'Pro' tier where users can ask
         | much longer questions and paste in longer code snippets among
         | other productivity-focused features.
        
           | zht wrote:
           | So you're going to encourage people to paste in code, likely
           | from work, into GPT-4?
        
             | rushingcreek wrote:
             | We're working on building out our own models of similar
             | quality that will have stricter privacy guarantees.
        
               | TeMPOraL wrote:
               | If you could find a way to run your functionality on
               | Azure, it would open a lot of doors to well-paying
               | potential customers. Microsoft is now offering OpenAI
               | models on Azure, with the value proposition being "we
               | offer SLA" and "complies with your data protection
               | policies", which alone turns it into something you can
               | actually use in a large company, as opposed to OpenAI's
               | offering.
        
       | rychco wrote:
       | I'm impressed so far. I'll keep trying it as an alternative to my
       | current kagi + chatgpt(4) + github search combo.
       | 
        | I had started paying for a monthly kagi subscription to improve
       | my search results related to programming questions & technical
       | research; but have found myself making use of chatgpt more often
       | lately. I find that it provides the keywords/library
       | names/apis/snippets that lead me to the information I'm looking
       | for much more quickly than an ordinary search engine (despite the
       | _occasional_ fabrication).
       | 
       | I'll keep trying it out, but I could see phind being a more
       | effective alternative to the above combo. Note that I would
       | _happily_ pay for this service.
        
       | gmb2k1 wrote:
       | Can I save my sessions to an account somehow or do I need to
       | bookmark the links?
        
         | rushingcreek wrote:
         | the links should be preserved in the sidebar on the left
         | (visible on larger screens and some tablets).
        
           | gmb2k1 wrote:
           | I guess, only as long as I don't delete my cookies. I was
           | thinking of something more persistent that I can easily share
           | between machines.
           | 
           | Amazing product, btw.
        
             | rushingcreek wrote:
             | Gotcha. Accounts are definitely on our roadmap.
        
       | mdmglr wrote:
       | Answers are far worse than Google and ChatGPT. No thanks.
       | 
       | "Windows enumerate connected displays powershell"
       | 
       | "jupyterlab CommandRegistry enable and disable"
       | 
       | "bind to user32.DLL c#"
       | 
       | "grayscale and Gaussian blur image c# binding opencv"
       | 
       | "JavaScript gauge example flight simulator 2020"
       | 
       | "C# synthetic click event on another window"
        
       | SubiculumCode wrote:
       | Very nice answers for my queries.
        
       | [deleted]
        
       | BorisYuri wrote:
       | [flagged]
        
       | dalmo3 wrote:
       | I've replaced 90% of my Google searches with Phind in the last
       | few weeks. My use cases are learning a new API, debugging,
       | generating test cases.
       | 
       | It's amazing. Real time saver. Just yesterday it saved me from
       | going down an hour+ rabbit hole due to a cryptic error message.
       | The first solution it gave me didn't work, neither did the
       | second, but I kept pushing and in just a couple of minutes I had
       | it sorted.
       | 
       | Having said that, I'm not sure I see the gain with Expert mode
       | yet. After using it for the last couple of days, it's definitely
       | much slower but I couldn't perceive it to be any more accurate.
       | 
       | Judging by your example, it looks like the main difference is
       | that the Expert mode search returned a more relevant top result,
       | which then the LLM heavily relied on for its answer. If search
       | results come from bing, can you really credit that answer to
       | Expert mode?
       | 
       | PS. You mention launching GPT-4 today, but the Expert Mode toggle
       | has been there for at least a few days, I reckon? Was it not
       | GPT-4 before?
        
         | rushingcreek wrote:
         | Love to hear it. It's true that for some searches you might not
         | notice a difference, but for complex code examples, reasoning,
         | and debugging Expert mode does seem to be much better. We
         | quietly launched Expert mode a few days ago on our Discord but
         | are now telling the broader HN community about it.
         | 
         | We're working on making all of our searches the same quality as
         | Expert mode while being much faster.
        
       | [deleted]
        
       | epups wrote:
       | Remarkably good. When using expert mode, I found it to be a
       | value-add to ChatGPT, which I honestly didn't think would be the
       | case.
       | 
       | Congrats, depending on pricing I would pay for your tool.
        
       | IanCal wrote:
       | Excellent!
       | 
       | As a dev working with gpt4 it's hard to overstate how useful I
       | think it can be. Great to see more tooling using it (or other
       | similar models).
        
       | clktmr wrote:
       | I tried "What features were added in the latest golang version",
       | but it's convinced 1.18 instead of 1.20 is the latest version.
       | When I asked about the latest version it told me it's 1.19rc2 and
       | gave me instructions to install it via a "go get ...", which is
       | not possible without having it installed in the first place.
       | 
       | I really wish for a better search these days, but instead of
       | grinding everything through an LLM I would much prefer better
       | presentation of the original information. In fact in the best
       | case it would filter all generated content.
        
         | rushingcreek wrote:
          | Thanks for the example. Asking about the most recent version of
          | something is a case we'll work on, but it should give you
          | good results if you ask about a specific version, e.g. "What
          | features were added in golang 1.20".
        
           | manojlds wrote:
           | How does it know about Pope in puffer jacket?
        
       | jonplackett wrote:
       | What's your business model for this? Free GPT-4 seems too good to
       | be true...
       | 
        | What's the catch?
        
         | rushingcreek wrote:
         | No catch. The feedback we get from this Show HN helps us
         | improve and pays for itself.
        
           | jonplackett wrote:
           | But at some point you have to pay your GPT-4 bill, right?
           | What's the plan there?
        
         | nidnogg wrote:
         | Indeed, would love a clarification from OP. But given the
          | tool's apparent quality it's very likely that it'll be SaaS'd
          | after a trial MVP run.
        
           | rushingcreek wrote:
           | There will always be a free version.
        
       | anentropic wrote:
       | It works pretty great... but how are they going to pay for it?
        
       | 0x008 wrote:
       | How are you using GPT-4 if there is no API for it currently?
        
         | J5892 wrote:
         | There... is.
        
           | 0x008 wrote:
           | It is not generally available though?
        
             | halfjoking wrote:
             | I signed up to the GPT4 waitlist the first day and I still
             | don't have access.
             | 
             | As these AI models get more powerful, giving certain people
             | months to use them while the rest of us twiddle our thumbs
              | seems unfair. It should be that everyone has access, but you
              | only get X API requests per day. Then increase X for all users
             | evenly. If OpenAI isn't going to be open, at least they
             | could be a little more fair with access.
        
               | thewataccount wrote:
               | I got access like a week after and I'm of no important
               | status and haven't spent more then 10USD on their api -
               | but I think it's based on the age of the account, I paid
               | for chatGPT premium as well.
               | 
               | But I do believe the api is accessible to non-
               | megacorporations.
        
             | jahewson wrote:
             | They are funded by YC.
             | https://news.ycombinator.com/item?id=32003215
        
             | thewataccount wrote:
             | It's waitlist only although I got invited rather quickly.
             | 
              | I've spent no more than $10 on GPT-3 previously, but my
              | account is older and I have ChatGPT Plus, so I'm not sure
              | if that affects your spot in line.
              | 
              | But non-megacorporations do have access.
        
         | [deleted]
        
       | nidnogg wrote:
       | I couldn't find anything regarding pricing or a business model.
        | Is this free for testing for now, with a SaaS model to follow
        | later on?
        
         | rushingcreek wrote:
         | There will always be a free version but we are planning on
         | introducing a pro version -- similar to ChatGPT Plus.
        
       | koito17 wrote:
        | On Expert mode, I decided to ask it a simple question, but in a
        | niche language, to see how well it can scour the internet:
        | 
        |     How do I emit JS object literals in a ClojureScript macro?
       | 
       | Instead I was given an answer to a completely unrelated question
        | and it cited some "Learn ClojureScript" website. In short, it
        | provided the following example:
        | 
        |     (def js-object (js-obj "key1" "value1", "key2" "value2"))
       | 
       | But I was looking for (1) a macro, and (2) the JS object to be
       | generated at compile-time, not run-time. Also, the stray comma is
       | very weird, but thankfully commas are ignored. Concretely, I was
        | expecting something like this:
        | 
        |     (defmacro mac []
        |       (let [js-vector (JSValue. [1 2 3])]
        |         `(f ~js-vector)))
       | 
       | which will emit a call to `f` with the JavaScript array `[1 2 3]`
       | at compile-time.
       | 
       | I know what the response will be to this comment: either "Clojure
       | is a niche language, who cares?" or "get better at prompting."
       | But otherwise, this is on-par with ChatGPT Plus, even when
       | presented with the possibility to crawl Clojurians Slack
       | archives, Stack Overflow, a bunch of blog posts, etc.
        
       | [deleted]
        
       | Etheryte wrote:
       | It seems to work nicely on simple queries, however there are some
       | rough corners which I don't think have a simple solution. For
       | example the query "how to set the timezone in react-datepicker"
       | first offers a Stack Overflow solution from 2019, however that
       | answer is outdated and no longer works. The other solution
       | offered copies code from a different Stack Overflow answer
       | verbatim, which is problematic since it doesn't correctly license
       | the code -- code on SO is CC BY-SA which means you have to both
       | attribute credit and link to the license.
        
         | adamkochanowicz wrote:
          | This is likely because it is using GPT-3.5 Turbo in part of its
          | stack, whose knowledge base is cut off in September 2021.
        
         | tiagod wrote:
          | Is a code example on how to use an OSS library API
          | copyrightable?
          | 
          | My intuition is it shouldn't be.
        
           | Etheryte wrote:
           | The answer is actual code on how to manipulate datetimes
           | forward and backwards between timezones, it's not simply an
           | API call.
        
       | aliasxneo wrote:
       | Pulled a random question from my ChatGPT history: "How do I get
       | my current account ID on AWS from the CLI?"
       | 
       | It gave the correct answer first, and then added a bizarre
       | "additional" answer:
       | 
       | Another option is to use the aws organizations describe-account
       | command with the --account-id parameter set to your account ID.
       | This command returns a JSON object with information about your
       | account, including the account ID. Here's an example command:
       | 
       | aws organizations describe-account --account-id 123456789012
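        | 
        | (For reference, the usual one-liner here is STS's
        | get-caller-identity; this sketch assumes the AWS CLI is
        | installed and credentials are configured:)

```shell
# Print only the 12-digit account ID for the currently active credentials
aws sts get-caller-identity --query Account --output text
```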
        
         | wavesounds wrote:
         | I'm seeing similar stuff. Phind seems to be hallucinating more
         | and coming up with stranger answers than ChatGPT Pro.
        
           | rushingcreek wrote:
           | is this for Phind Expert mode? Phind Expert mode should
           | hallucinate less than ChatGPT Pro.
        
             | unshavedyak wrote:
             | Oh is not-Expert GPT3.5 and Expert just GPT4?
        
       | sjapps wrote:
       | [dead]
        
       | lastdong wrote:
        | I wish we could have a link to share the answer - really useful
        | when it involves code snippets and explanation - for
        | ChatGPT/Expert mode.
        
         | rushingcreek wrote:
         | you can! the URL transforms into a unique permalink once the
         | answer finishes generating.
        
       | afro88 wrote:
       | Passes my smell test, which is to ask "how do I migrate my swift
       | composable architecture project to structured concurrency". This
       | uses 2 things that GPT-4 doesn't know about yet: Swift 5.5+ and
       | composable architecture 1.0+
       | 
        | It pulled in information from Apple, the composable
        | architecture folks, and a Swift forums post to give a really nice
       | answer.
       | 
       | Well done! I'll be using this a lot.
       | 
       | I'd love to know more about how you pull in relevant text from
       | web results for it to use in answers.
        
         | rushingcreek wrote:
         | That's our secret sauce :)
         | 
         | We've built out a decently complex pipeline for this, but a lot
         | of the magic has to do with the specific embedding model we've
         | trained to know what text is relevant to feed in and what text
         | isn't.
        
           | icepat wrote:
           | This is a really cool tool. Have you considered filtering
           | known blog-spam/low-quality content mill/SEO'ed garbage type
           | sites (ie: GeeksForGeeks, W3Schools, TutorialsPoint)? That
           | would make me definitely jump on this, and even pay for a
           | subscription. I spend way too much time having to scroll down
            | Google past all this junk before I hit the official
            | documentation for the module I'm using.
        
             | rushingcreek wrote:
             | we do some filtering ourselves, but you can specify your
             | own custom filters at https://phind.com/filters
        
               | icepat wrote:
               | This is great, going to see how this fares tomorrow as a
               | replacement for Google.
        
           | mrg3_2013 wrote:
            | Any pointers on how to build custom embeddings? I am working
            | in a specialized domain where words may mean different things
            | than in the rest of the world. I want to create my own
            | embeddings, which I suspect would help.
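            | 
            | (A minimal sketch of one route: count in-window
            | co-occurrences over your own corpus, so similarity reflects
            | domain usage rather than general-web usage. A real pipeline
            | would train word2vec/fastText or fine-tune a
            | sentence-transformer instead; the corpus below is made up.)

```python
from collections import Counter, defaultdict
from math import sqrt

def train_embeddings(corpus, window=2):
    """Each word's vector = counts of the words seen within `window` of it."""
    vecs = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for i, w in enumerate(words):
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if i != j:
                    vecs[w][words[j]] += 1
    return vecs

def similarity(vecs, a, b):
    """Cosine similarity between two word vectors."""
    va, vb = vecs[a], vecs[b]
    dot = sum(va[w] * vb[w] for w in va)
    na = sqrt(sum(v * v for v in va.values()))
    nb = sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical domain corpus where "python" names a product, not a language.
domain_corpus = [
    "short the python contract before settlement",
    "short the viper contract before settlement",
    "run the python script on the build server",
]
vecs = train_embeddings(domain_corpus)
```

            | In this toy corpus "python" lands closer to "viper" than to
            | "server", which is exactly the domain-specific shift a
            | general-purpose embedding would miss.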
        
       | 0xDEF wrote:
        | How are you paying for the GPT-4 bill?
        | 
        |     8K context:  Prompt $0.03 / 1K tokens, Completion $0.06 / 1K tokens
        |     32K context: Prompt $0.06 / 1K tokens, Completion $0.12 / 1K tokens
       | 
       | https://openai.com/pricing
        
         | rushingcreek wrote:
         | We are venture-funded.
        
       | tymonPartyLate wrote:
       | This is fantastic, congratulations! I tried it on some AWS
       | related issues I was googling at work and it gave me the correct
       | answers right away. I hope you can find a reasonable way to
       | monetise. Kagi search was not enough of a value add to me to be
        | worth $9 per month. But I'd happily pay for usage-based pricing
       | for a specialised tool like this.
        
         | evandena wrote:
         | An enterprise subscription would be welcomed.
        
       | arco1991 wrote:
       | Very cool, this is definitely way more useful than either google
       | or ChatGPT-4 directly. Will be using this going forward, nice
       | work!
        
       | wpietri wrote:
       | I tried it with exactly one query, something specific that had
       | come up recently in my work. I was looking for one sentence of
        | answer. It was terrible, giving me 500 words of blather, much of
        | which was irrelevant and some of which was 100% wrong. It
       | was absolutely the arrogant kid who had skipped most of the
       | lectures but who expected to be able to BS through the exam
       | enough to pass the class.
       | 
       | For my needs it was a pure waste of time, and it would have been
       | a bigger waste of time had I not already known enough to judge
       | its output. So I would call this worse than Google and also worse
       | than nothing at all. I suspect this is an inherent problem with
       | LLMs, not something fixable. But in the spirit of constructive
       | criticism, I'd suggest you consider that for programming use
       | cases, no answer is better than a bad answer.
        
         | rushingcreek wrote:
         | Sorry to hear this. What did you ask? And was it on Expert
         | mode?
        
           | wpietri wrote:
           | I asked for alternatives to a python library. I did not turn
           | on expert mode because it wasn't clear to me what that meant:
           | expert in the topic, expert in using your tool, maybe
           | something else. I tried turning that on just now and it gave
           | me an answer that looked worse, but so slowly that I gave up
           | before I got to the end.
        
             | IanCal wrote:
             | > I did not turn on expert mode because it wasn't clear to
             | me what that meant: expert in the topic, expert in using
             | your tool, maybe something else
             | 
             | Fair as someone coming in blind, but the post here did
             | explicitly tell you to use it and why.
             | 
             | What was the query?
        
               | wpietri wrote:
               | > Fair as someone coming in blind, but the post here did
               | explicitly tell you to use it and why.
               | 
               | A protip for you: there are few better ways to make a bad
               | product than complaining that the users are doing it
               | wrong. The users are going to keep using it like users
               | do. You either adapt the product, filter for a different
                | set of users, or expect to keep generating bad user
               | experiences.
               | 
               | Here, I clicked on the product link on the HN home page,
               | only later going to the discussion that you apparently
                | wanted me to read first. If you really want me to know
               | that first, either make it the default or put it on the
               | product page, not buried in 6 paragraphs of gray-on-cream
               | text on a page I may not see until after I've tried it.
        
               | IanCal wrote:
               | I said it was fair as someone coming blind, but you came
               | to a show hn and didn't read the post, had a problem with
               | something you didn't understand and then complained about
               | it. You may find some benefit in reading docs when having
               | problems with tools.
               | 
               | I have nothing to do with phind by the way.
               | 
               | > gray-on-cream text on a page I may not see until after
               | I've tried it.
               | 
               | I'm on board with complaints about hns terrible
               | accessibility.
        
         | sgarrity wrote:
         | I had a similar experience. I asked if you can use CSS logical
         | properties in IE11. It confidently told me YES with a whole
         | bunch of follow-up.
         | 
         | The answer was no.
        
           | IanCal wrote:
            | I tried that and it said no. Were you using expert mode?
        
           | rushingcreek wrote:
            | Were you using Expert mode? It just worked for me:
            | https://www.phind.com/search?cache=105fdb43-8055-43fe-9247-e...
        
       | l2silver wrote:
       | I've seen a lot of people saying this is the future of search...
        | But this is so destructive for content producers - why would they
        | continue to publish content that has no chance of SEO value?
        
         | rushingcreek wrote:
         | I agree that something needs to be done to help content
         | producers. We're not opposed to revenue sharing.
        
       | pncnmnp wrote:
       | Awesome! I can see myself using this everyday.
       | 
       | Are you using LangChain? I'm curious, and if you are, which
       | agents are you experimenting with (such as SERP API)?
       | 
        | Additionally, have you tried playing around with "Question
        | Answering with Sources"
        | (https://python.langchain.com/en/latest/modules/chains/index_...)?
        | If so, how effective has it been in practice?
        
         | rushingcreek wrote:
         | We're not using LangChain -- we built out our core retrieval
         | pipeline long before it existed. But we're big fans! And we
         | hope to contribute some of the things we learned to open
         | source.
        
       | s1k3s wrote:
       | Feels kind of weird if you take it out of the concrete questions
       | related to coding and you move to more abstract questions like
       | architecture, scalability, security etc. By weird I mean it feels
       | like it summarises abstract answers like they've been taken out
       | of a copywriter's blog who writes about those topics without
       | actually going in depth about anything. Cool project though, good
       | luck!
        
         | rushingcreek wrote:
         | if you ask it to go in depth, it will! try using Expert mode.
        
           | s1k3s wrote:
           | Indeed! Mindblowing answers on "expert" mode. Really nice!
        
       | jacobr1 wrote:
       | We need someone to build this that indexes the myriad of
       | corporate data hidden in various docs and saas systems.
        
         | Game_Ender wrote:
         | Check out https://glean.com no clue how good their new "AI"
          | features are but it definitely unlocks all the data from the
          | typical corporate tools and gives you one search box.
        
         | yuvalsteuer wrote:
         | Open-source solution: https://github.com/gerevai/gerev
        
         | rushingcreek wrote:
         | Exactly this is being built by our good friends over at
         | https://needl.tech!
        
           | jacobr1 wrote:
           | Looks like they have the integration half of the equation.
           | But also they need to plugin the summarization/synthesis
           | utility to fully realize the value. There have been
           | enterprise search apps in the past and they usually have
           | failed because of A) too much data and no clear way to
           | prioritize and B) keeping up with all the new systems that
           | arise all the time.
           | 
           | If some kind of GPT pipeline could solve A above - and
           | identify the relevant synthesis of data into a coherent
           | answer it would be supremely useful. I usually can do a
            | search in 3-5 systems manually - just getting the results,
            | while awkward, isn't the problem. The problem is knowing which
           | nugget of info in 80 pages of slack search results is
           | relevant to my problem.
        
             | yuvalsteuer wrote:
              | prioritizing and extracting summaries from docs is what we
              | do at gerev. See https://github.com/gerevai/gerev for
              | yourself.
             | 
             | Or you could try our sweet little demo:
             | https://demo.gerev.ai
        
       | adamkochanowicz wrote:
       | Perplexity.ai is also worth considering. The chrome extension is
       | a game changer.
        
       | ck_one wrote:
       | It's a good step up from using pure GPT4.
       | 
        | How do you think they built it? How do they access content
        | from other pages so quickly?
       | 
       | My guess is that they have crawled a lot of popular developer
       | docs pages (Mozilla, Stackoverflow, Youtube, etc) and created
       | embeddings for all paragraphs on these sites. Then for each
       | search query they use a clever prompt + use the knowledge from
        | the embeddings lookup.
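        | 
        | (That pipeline can be sketched end to end; below, a toy
        | bag-of-words counter stands in for the trained embedding model,
        | and the paragraphs, prompt format, and function names are made
        | up:)

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy embedding: bag-of-words counts (stand-in for a trained model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = (sqrt(sum(v * v for v in a.values()))
            * sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Offline: crawl documentation pages and embed each paragraph once.
paragraphs = [
    "llama.cpp runs LLaMA inference on a CPU with 4-bit quantization",
    "React hooks let function components use state and effects",
    "BFS explores a graph level by level using a queue",
]
index = [(p, embed(p)) for p in paragraphs]

def build_prompt(query, k=2):
    """Online: rank indexed paragraphs against the query, feed top-k to the LLM."""
    qv = embed(query)
    top = sorted(index, key=lambda pe: cosine(qv, pe[1]), reverse=True)[:k]
    context = "\n".join(p for p, _ in top)
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"
```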
        
         | rushingcreek wrote:
         | Something like that :)
        
           | ck_one wrote:
           | It looks like you are using the bing api to search the web
           | and then somehow integrate the result into the answer. Will
           | do more digging ;)
           | 
            | Cool project, and it seems like you and your team have been
            | working on it since long before the hype began.
        
       | celeritascelery wrote:
        | Wow. This is way more useful than Google. I popped in a
       | query I have been trying to figure out for the last 20 minutes
       | and this directed me right to the page I needed.
        
       | dvt wrote:
        | I asked it this question[1], one I asked on SO over 10 years
        | ago:
        | 
        |     I traverse a maze using a basic A* implementation (using the
        |     Manhattan distance metric). However, after the traversal, I
        |     would like to find out what wall would give me the best
        |     alternative path. Apart from removing every block and
        |     re-running A* on the maze, what's a more clever and elegant
        |     solution?
        | 
        | The SO thread includes working code and very friendly
        | explanations and discussion. The answer Phind gives is the
        | following[2]. It tells
       | me to use D*-lite (complete overkill), Theta* (totally wrong), or
       | "Adaptive-A*" (not sure if that's an actual thing, all I can find
       | is a random paper).
       | 
       | I was working on this in the context of a game I was making at
       | the time, and while this is certainly a hard (and maybe rare)
       | question, it's still on the level of CS undergrad.
       | 
       | [1] https://stackoverflow.com/questions/2489672/removing-the-
       | obs...
       | 
       | [2]
       | https://www.phind.com/search?cache=d08cd0e7-4aa8-4d75-b1cd-7...
        
         | thethirdone wrote:
         | The SO answer is pretty good and probably the most
         | generalizable pathfinding solution.
         | 
         | My first thought was to also run A* from the end to the start.
         | This would allow you to look at each wall in the maze and check
         | if the A* cost from the start + A* cost from the end < best
         | current path. In my opinion, this would result in simpler code
         | than the SO solution.
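          | 
          | (A sketch of that bidirectional idea, assuming the maze is a
          | character grid where walls are '#' cells; with unit step
          | costs, plain BFS stands in for A*. The grid format and names
          | are made up:)

```python
from collections import deque

STEPS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def bfs_dist(grid, start):
    """Shortest distance from `start` to every reachable open cell."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    q = deque([start])
    while q:
        r, c = q.popleft()
        for dr, dc in STEPS:
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return dist

def best_wall(grid, start, goal):
    """One search from each end, then score every wall in O(1) apiece."""
    d_start, d_goal = bfs_dist(grid, start), bfs_dist(grid, goal)
    best, best_cost = None, d_start.get(goal, float('inf'))
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            if grid[r][c] != '#':
                continue
            near = [(r + dr, c + dc) for dr, dc in STEPS]
            ins = [d_start[p] for p in near if p in d_start]
            outs = [d_goal[p] for p in near if p in d_goal]
            if ins and outs:
                # Enter the removed wall cell from the start side (+1),
                # then leave it toward the goal side (+1).
                cost = min(ins) + 2 + min(outs)
                if cost < best_cost:
                    best, best_cost = (r, c), cost
    return best, best_cost
```

          | With exhaustive searches from both ends this is exact; the
          | fairness caveat discussed further down only arises once an A*
          | heuristic starts pruning states.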
        
           | CaptainNegative wrote:
           | An equivalent formulation to the SO solution with a simple
           | implementation is to double the vertices and edges in the
           | graph G by making a duplicate parallel universe G'. One can
           | always move from v in G to its corresponding v' in G' at zero
           | cost, but there is also a cost-1 edge from vertex u in G to
           | v' in G' whenever u and v are separated by a wall. Once one
           | crosses into G', there is no going back.
           | 
            | One can pass the new graph, G ∪ G' plus all the
           | intermediate edges, into the already existing A*
           | implementation to search for an optimal s-t' path. This works
           | as long as the heuristic for v is also admissible for v', but
           | most are. I think all three of these algorithms could in
           | principle run into problems for certain uncommon admissible
           | heuristics.
        
           | dvt wrote:
           | > My first thought was to also run A* from the end to the
           | start. This would allow you to look at each wall in the maze
           | and check if the A* cost from the start + A* cost from the
           | end < best current path. In my opinion, this would result in
           | simpler code than the SO solution.
           | 
           | Yeah, this is the naive O(n^n) solution. Remove every wall,
           | see what path is the cheapest. Having come up with this, I
           | specifically wanted a more elegant solution. As it turns out,
           | you can do it in one shot (but it's a bit tricky).
        
             | thethirdone wrote:
              | I am not explaining an O(n^n) solution. It's an O(E) time
              | and O(V) space solution, just like normal A*.
              | 
              | I am assuming you are saving the initial A* run and the
              | subsequent reverse run. Then `A* cost from the start + A*
              | cost from the end < best current path` is an O(1) time
              | operation that occurs a maximum of once per edge.
        
               | dvt wrote:
               | Maybe I'm totally misunderstanding, but figuring out the
               | "best current path" means re-running A* every time you
               | break a wall, as removing arbitrary walls can give you a
               | totally new path to the goal; to wit, it might be a path
               | not even originally visited by A*. And you have to do
               | that every time you try out a wall candidate, so to me
               | this appears to be quadratic(ish) complexity.
               | 
               | (But maybe this is exactly what the SO answer does "under
               | the hood," to be honest, I haven't done a deep complexity
               | analysis of it and I haven't thought about this problem
               | in ages.)
        
               | thethirdone wrote:
               | > Maybe I'm totally misunderstanding, but figuring out
               | the "best current path" means re-running A* every time
               | you break a wall, as removing arbitrary walls can give
               | you a totally new path to the goal; to wit, it might be a
               | path not even originally visited by A*. And you have to
               | do that every time you try out a wall candidate, so to me
               | this appears to be quadratic(ish) complexity.
               | 
               | My algorithm should obviously work using Dijkstra's
               | algorithm instead of A*. You just have to make sure ALL
               | nodes are explored. You don't have to run searches per
               | node.
               | 
               | Why it works with A* too is MUCH more subtle. In fact it
               | only works if your A* implementation is fair to all
               | likely shortest paths; most implementations do not
               | guarantee fairness. You can enforce fairness by changing
               | your heuristic to be only 0.9999 * Manhattan distance.
               | Fairness ensures that any path that will be the best path
               | after deleting a wall will have a cost recorded for both
               | sides of the wall.
               | 
               | > (But maybe this is exactly what the SO answer does
               | "under the hood," to be honest, I haven't done a deep
               | complexity analysis of it.)
               | 
               | If the original maze is 2D with coordinates (x,y), the SO
               | algorithm is essentially searching in a 3D maze with
               | coordinates `(x,y, number of times crossed a wall)` and
               | directional edges from `(x,y,n) to (x+dx,y+dy,n+1)` if
                | there is a wall there.
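                | 
                | (That (x, y, walls-crossed) view can be sketched
                | directly as a BFS over the layered state space, again
                | assuming a character grid with '#' walls and unit step
                | costs; the example is made up:)

```python
from collections import deque

def shortest_with_one_removal(grid, start, goal):
    """BFS over (row, col, walls_used) states, allowing at most one wall."""
    rows, cols = len(grid), len(grid[0])
    first = (start[0], start[1], 0)
    dist = {first: 0}
    q = deque([first])
    while q:
        r, c, used = q.popleft()
        if (r, c) == goal:
            return dist[(r, c, used)]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            nu = used + (grid[nr][nc] == '#')  # crossing a wall burns the budget
            if nu <= 1 and (nr, nc, nu) not in dist:
                dist[(nr, nc, nu)] = dist[(r, c, used)] + 1
                q.append((nr, nc, nu))
    return -1  # goal unreachable even with one removal
```

                | Generalizing the budget from 1 to k removals is the
                | same BFS over a (k+1)-layer graph.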
        
               | dvt wrote:
               | > My algorithm should obviously work using Dijkstra's
               | algorithm instead of A*. You just have to make sure ALL
               | nodes are explored.
               | 
               | Gotcha, yeah, that's what I was thinking. You lose
               | basically all of A-star's optimization because you do
               | need all nodes explored (turning it into pure Dijkstra).
               | Makes total sense.
               | 
               | > If the original maze is 2D with coordinates (x,y), the
               | SO algorithm is essentially searching in a 3D maze with
               | coordinates
               | 
               | That's a neat way of looking at that answer, cool
               | insight!
        
         | rad_gruchalski wrote:
         | I asked OpenAI that exact question, this is what it said:
         | 
         | > One approach to finding the wall that would give you the best
         | alternative path without having to remove every block and re-
         | run A* is to use the concept of "edge weights."
         | 
         | > When you first run A* to traverse the maze, you can assign
         | each edge a weight equal to the distance between the two
         | adjacent nodes, which is equal to the Manhattan distance in
         | your case. Once you have completed the traversal, you can go
         | through each edge of the path that A* has found and temporarily
         | increase its weight by some factor. Then, you can run A* again,
         | but this time the pathfinder will prefer alternative paths that
         | do not use those edges that have been "blocked" by the
         | increased weight.
         | 
         | > By gradually increasing the weight of each edge one at a
         | time, you can determine which wall would give you the best
         | alternative path without having to re-run A* on the entire maze
         | every time. You can also use a binary search-like approach to
         | speed up the process.
         | 
          | > Here's some pseudocode for this approach:
          | 
          |     path = A_star(start_node, goal_node, edge_weights)
          |     for edge in path:
          |         original_weight = edge_weights[edge]
          |         edge_weights[edge] = original_weight * factor
          |         new_path = A_star(start_node, goal_node, edge_weights)
          |         if new_path is better than path:
          |             best_wall = edge
          |             path = new_path
          |         edge_weights[edge] = original_weight
         | 
         | > In this pseudocode, factor is the amount by which you
         | increase the weight of each edge, and better than is some
         | metric for determining whether a path is better than another
         | (e.g., if it is shorter or has fewer turns).
         | 
         | Followed by this question: List ten most relevant links to the
         | information you have summarized above.
         | 
          | > Here are ten relevant links related to A* search and maze
          | traversal:
          | 
          |     "A* search algorithm" on Wikipedia:
          |     https://en.wikipedia.org/wiki/A*_search_algorithm
          |     "Manhattan distance" on Wikipedia:
          |     https://en.wikipedia.org/wiki/Manhattan_distance
          |     "A* Pathfinding for Beginners" tutorial on Red Blob Games:
          |     https://www.redblobgames.com/pathfinding/a-star/introduction.html
          |     "Introduction to A*" video on YouTube by Sebastian Lague:
          |     https://www.youtube.com/watch?v=-L-WgKMFuhE
          |     "A* Pathfinding Visualization" demo on GitHub by Qiao Zhang:
          |     https://qiao.github.io/PathFinding.js/visual/
          |     "A* Pathfinding Tutorial" on Amit's Game Programming Pages:
          |     http://theory.stanford.edu/~amitp/GameProgramming/
          |     "A* Pathfinding Algorithm" tutorial on GeeksforGeeks:
          |     https://www.geeksforgeeks.org/a-search-algorithm/
          |     "Introduction to Pathfinding" tutorial on Unity Learn:
          |     https://learn.unity.com/tutorial/introduction-to-pathfinding
          |     "Pathfinding in Games: A* Search" tutorial on Ray Wenderlich:
          |     https://www.raywenderlich.com/3016-pathfinding-in-games-a-search
          |     "Maze generation algorithm" on Wikipedia:
          |     https://en.wikipedia.org/wiki/Maze_generation_algorithm
          | 
          | > These links provide a range of resources, including
          | tutorials, demos, and background information on A* search and
          | maze generation.
         | 
         | Followed by: Please rewrite the lua code to golang.
         | 
         |       path := AStar(startNode, goalNode, edgeWeights)
         |       for _, edge := range path {
         |           originalWeight := edgeWeights[edge]
         |           edgeWeights[edge] = originalWeight * factor
         |           newPath := AStar(startNode, goalNode, edgeWeights)
         |           if newPath.isBetterThan(path) {
         |               bestWall := edge
         |               path = newPath
         |           }
         |           edgeWeights[edge] = originalWeight
         |       }
         | 
         | > In this Go code, AStar is a function that takes in the start
         | and goal nodes, as well as the edge weights, and returns the
         | optimal path using A* search. The isBetterThan method is a
         | placeholder for some comparison function that determines
         | whether a path is better than another (e.g., if it is shorter
         | or has fewer turns).
         | 
         | > Note that the syntax and semantics of Go are quite different
         | from Lua, so this is not a direct translation. However, the
         | logic should be equivalent, and the Go code should be easy to
         | understand if you are familiar with the original Lua code.
         | 
         | Finally: Show the declaration of the AStar struct.
         | 
         | > Sure! Here is an example declaration of an AStar struct in
         | Go:
         | 
         |       type AStar struct {
         |           startNode   Node
         |           goalNode    Node
         |           edgeWeights map[Edge]float64
         |       }
         | 
         | > In this example, the AStar struct has three fields:
         | 
         |   - startNode: a Node struct representing the starting node
         |     of the pathfinding search.
         |   - goalNode: a Node struct representing the goal node of
         |     the search.
         |   - edgeWeights: a map from Edge structs to float64 values
         |     representing the weights of each edge in the graph.
         | 
         | > Note that this is just an example, and the actual definition
         | of the AStar struct may depend on the specific implementation
         | of the pathfinding algorithm. The Node and Edge structs may
         | also need to be defined separately, depending on how they are
         | used in the implementation.
         | 
         | And as a cherry on top: Inline comments in the AStar struct.
         | 
         | > Sure, here's an example of adding inline comments to the
         | AStar struct I defined earlier:
         | 
         |       type AStar struct {
         |           startNode   Node             // The starting node of the pathfinding search.
         |           goalNode    Node             // The goal node of the search.
         |           edgeWeights map[Edge]float64 // A map from edges to their weights in the graph.
         |       }
         | 
         | > In this example, I added comments to explain the purpose of
         | each field in the AStar struct. The comments describe what each
         | field represents and what it is used for in the pathfinding
         | algorithm. By adding comments to the struct definition, it
         | makes it easier for others to understand the role of each field
         | in the codebase.
        
           | dvt wrote:
           | > When you first run A* to traverse the maze, you can assign
           | each edge a weight equal to the distance between the two
           | adjacent nodes, which is equal to the Manhattan distance in
           | your case. Once you have completed the traversal, you can go
           | through each edge of the path that A* has found and
           | temporarily increase its weight by some factor. Then, you can
           | run A* again, but this time the pathfinder will prefer
           | alternative paths that do not use those edges that have been
           | "blocked" by the increased weight.
           | 
           | Yeah, this is wrong for two reasons. First, it's actually
           | doing things backwards (you would _decrease_ the cost of a
           | wall; you wouldn't need to block any path). And second, it's
           | still predicated on re-running A* after modifying costs (and
           | comparing it with our original path), getting us back to
           | O(n^n).
           | 
           | This is more evidence that AI models are good at
           | regurgitating, but not great at _reasoning_. GPT-4 probably
           | only saw this problem once or twice (since it was trained on
           | the Pile, which includes all of SO, including my original
           | question), but not enough times to understand what the
           | solution is (or even what you're asking, for that matter).
        
             | rad_gruchalski wrote:
              | The unfortunate part is that OpenAI cannot answer a
              | question like "show me the most relevant stackoverflow
              | links for my original question".
             | 
             | Edit: however, I went on and grilled it further.
             | 
             | Question: Rewrite the golang code to solve the maze in
             | constant time.
             | 
             | > I'm sorry, but it is not possible to solve a maze in
             | constant time, as the time complexity of the problem
             | depends on the size of the maze and the complexity of the
             | search algorithm used.
             | 
             | > The A* algorithm and its variants are some of the most
             | efficient and widely used algorithms for maze pathfinding,
             | as they can provide near-optimal solutions in a reasonable
             | amount of time. However, they still have a time complexity
             | that is exponential in the worst case.
             | 
              | > There are other algorithms that can solve certain types
              | of mazes faster than A*, such as maze-solving algorithms
              | that rely on distance transforms, flood fills, or
              | topological sorting. However, these algorithms are
              | typically specialized and may not work for all types of
              | mazes.
              | 
              | > In summary, it is not possible to solve a general maze
              | in constant time, but there are specialized algorithms
              | that can solve certain types of mazes faster than A*.
             | 
              | A couple of attempts to rephrase make the GPT a bit
              | apologetic. It claims there's nothing it can do about it.
              | Eventually, when asked to solve a maze of size 20 by 20,
              | it replies:
             | 
             | > With a maze size of 20x20, it is likely that the
             | performance of the A* algorithm is already sufficient for
             | practical purposes. However, if you encounter performance
             | issues or have specific constraints that require faster
             | solutions, you can consider implementing these optimization
             | techniques.
             | 
             | So I ask it: What optimization techniques I could
             | implement?
             | 
              | > There are several optimization techniques that can be
              | implemented to improve the performance of the A*
              | algorithm or other maze-solving algorithms. Here are some
              | examples:
              | 
              |   - Use a more informed heuristic: The Manhattan distance
              |     heuristic used in the original code is a good
              |     general-purpose heuristic, but it may not be the most
              |     efficient for all maze configurations. Using a more
              |     informed heuristic, such as Euclidean distance or
              |     diagonal distance, can lead to faster convergence and
              |     fewer nodes being explored.
              |   - Implement early termination: If you only need a
              |     suboptimal solution or are willing to tolerate some
              |     error in the solution, you can terminate the search
              |     early once a "good enough" solution has been found.
              |     For example, you can set a maximum number of nodes to
              |     explore or a maximum search depth.
              |   - Prune the search space: If you know some parts of the
              |     maze are not relevant to the solution, you can
              |     exclude those parts from the search space to reduce
              |     the number of nodes that need to be explored. For
              |     example, if you know the goal is located in a certain
              |     region of the maze, you can exclude nodes outside of
              |     that region.
              |   - Use a better data structure: The performance of the
              |     A* algorithm is heavily dependent on the data
              |     structure used to store the open and closed sets.
              |     Using a more efficient data structure, such as a
              |     binary heap or Fibonacci heap, can speed up the
              |     algorithm.
              |   - Parallelize the search: If you have access to
              |     multiple processing units, you can split the search
              |     space into multiple parts and run the A* algorithm in
              |     parallel. This can significantly reduce the search
              |     time, especially for large mazes.
             | 
             | > These techniques are not mutually exclusive, and you can
             | combine them in different ways to achieve better
             | performance. However, keep in mind that the optimal
             | combination of techniques will depend on the specifics of
             | the maze-solving problem and the available computational
             | resources.
             | 
             | I still find it pretty good. It also proves my point I was
             | making somewhere else. The challenge in applying GPT to
             | software problems is knowing what to ask next and verifying
             | that it gave a correct answer - as in, one needs to
             | understand the problem without blindly trusting that what
             | it said was right.
        
         | devit wrote:
         | Here you can apply the most common technique for such problems,
         | which is to create a graph whose vertices are pairs made of a
         | vertex of the original graph, plus the "state" of the traversal
         | (or in other words, the essential information about the path
         | used to reach the vertex).
         | 
         | In this case, the state is the number of walls passed, so just
         | create a graph made of (v, k) pairs where for adjacent v and w
         | in the grid, (v, k) connects to (w, k) if there is no wall, and
         | it connects to (w, k + 1) if there is a wall.
         | 
         | Then run A*, finding the shortest path from (start, 0) to (end,
         | 1), reconstruct the path and look at where it transitions from
         | a (v, 0) to a (w, 1) and then return the wall between v and w.
         | 
         | You can use this for all sorts of other constraints, like
         | finding a path that only changes direction up to N times, or a
         | path where you don't get eaten by the monster moving
         | deterministically (in this case the state is the monster
         | position), or a path where you spend up to N time underwater
         | consecutively, etc.
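The layered-graph construction above can be sketched concretely. Since every grid step costs 1, plain BFS stands in for A* below (it is A* with a zero heuristic); the Cell/State types and the tiny corridor maze are illustrative assumptions, not code from the thread:

```go
package main

import "fmt"

type Cell struct{ R, C int }

// State pairs a cell with the traversal state devit describes:
// how many walls the path has passed through so far (0 or 1).
type State struct {
	Cell Cell
	K    int
}

// BestWall finds the wall whose removal yields the shortest
// start->goal path by searching the layered (cell, k) graph.
// walls marks unordered cell pairs separated by a wall.
func BestWall(rows, cols int, walls map[[2]Cell]bool, start, goal Cell) (Cell, Cell, int) {
	wallBetween := func(a, b Cell) bool {
		return walls[[2]Cell{a, b}] || walls[[2]Cell{b, a}]
	}
	dirs := [4][2]int{{1, 0}, {-1, 0}, {0, 1}, {0, -1}}
	prev := map[State]State{}
	dist := map[State]int{{start, 0}: 0}
	queue := []State{{start, 0}}
	for len(queue) > 0 {
		s := queue[0]
		queue = queue[1:]
		for _, d := range dirs {
			n := Cell{s.Cell.R + d[0], s.Cell.C + d[1]}
			if n.R < 0 || n.R >= rows || n.C < 0 || n.C >= cols {
				continue
			}
			k := s.K
			if wallBetween(s.Cell, n) {
				k++ // crossing a wall consumes the single allowance
			}
			if k > 1 {
				continue
			}
			t := State{n, k}
			if _, seen := dist[t]; !seen {
				dist[t] = dist[s] + 1
				prev[t] = s
				queue = append(queue, t)
			}
		}
	}
	end := State{goal, 1}
	if _, ok := dist[end]; !ok {
		return Cell{}, Cell{}, -1 // no path that crosses a wall
	}
	// Walk back until the path transitions from k=0 to k=1:
	// the wall between those two cells is the one to remove.
	for s := end; s.K == 1; s = prev[s] {
		if p := prev[s]; p.K == 0 {
			return p.Cell, s.Cell, dist[end]
		}
	}
	return Cell{}, Cell{}, -1
}

func main() {
	// A 1x3 corridor with a wall between (0,1) and (0,2).
	walls := map[[2]Cell]bool{{Cell{0, 1}, Cell{0, 2}}: true}
	a, b, d := BestWall(1, 3, walls, Cell{0, 0}, Cell{0, 2})
	fmt.Println(a, b, d) // {0 1} {0 2} 2
}
```

The same layering handles the other constraints mentioned: for "at most N direction changes" the state is (cell, heading, changes); for the deterministic monster it is (cell, monster position).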
         | 
         | But GPT-4 seems very bad at solving problems, so even though
         | this is an easy problem, it's not unexpected that it would not
         | come up with this solution.
        
         | sebzim4500 wrote:
         | Personally, I'd find that prompt difficult to understand
         | without the title of the stackoverflow question. Did you
         | include that?
        
           | xiphias2 wrote:
           | Even just writing the title and nothing else gives more
           | interesting answer:
           | 
           | https://www.phind.com/search?cache=0e527db3-7740-470e-bba6-5.
           | ..
        
            | dvt wrote:
            | No, but I'm not sure it would make much of a difference;
            | feel free to try it out.
        
       | 12907835202 wrote:
       | Whoever owns the Ask Jeeves trademark has the perfect moment for
       | a comeback if they get it right
        
       | aaviator42 wrote:
        | I was wondering if it'll be able to pull documentation for a (not
        | popular at all) library I wrote from Github, and it seemed to
        | get the github repo right, but then hallucinated the functions.
        | Still v cool!
       | 
       | https://www.phind.com/search?cache=f14760a0-a409-44d6-aa8f-e...
        
         | rushingcreek wrote:
         | Running with Expert mode gets it right!
         | https://www.phind.com/search?cache=7cde3ce1-4b27-4f21-98a0-1...
        
       | noobmax wrote:
       | [dead]
        
       | layer8 wrote:
       | This looks quite nice. One suggestion: Use a font with equal-
       | width decimal digits. Otherwise the [0][1][2] links look weird.
        
       | supermatt wrote:
       | I have been using phind on and off for a few months. I found it
       | amazing for discovery of software libs for a project I was
       | working on. I could not find the libs when searching google, etc,
       | but found them through phind.
       | 
       | When I compared the output of phind to GPT-3 I found phind vastly
       | superior for this kind of discovery. Were you previously
       | augmenting the expert with GPT-3 or was it some custom model?
       | 
       | Best of luck for the new launch!
        
         | rushingcreek wrote:
         | Love to hear it! Expert mode is a new feature that has always
         | been GPT-4 augmented with our custom web context.
        
           | supermatt wrote:
            | I didn't mean Expert mode. I mean the AI answer thing. That
            | definitely predated GPT-4, no?
        
             | rushingcreek wrote:
             | Yes, we launched in January 2022 using our own models
             | exclusively. We generally use a combination of our own
             | models + OpenAI but are transitioning increasingly to our
             | own models once again.
        
       | redleggedfrog wrote:
        | It's trained on other people's (less than great) code? Because
        | the results I'm getting wouldn't pass my company's code reviews.
        
       ___________________________________________________________________
       (page generated 2023-04-12 23:00 UTC)