[HN Gopher] I wrote a meta mode for ChatGPT
       ___________________________________________________________________
        
       I wrote a meta mode for ChatGPT
        
       Author : airesearcher
       Score  : 57 points
       Date   : 2023-12-10 20:06 UTC (2 hours ago)
        
 (HTM) web link (www.novaspivack.com)
 (TXT) w3m dump (www.novaspivack.com)
        
       | airesearcher wrote:
       | I wrote a custom instruction for ChatGPT Pro that radically
       | improves productivity inside ChatGPT. Add it to your custom
       | instructions and enjoy!
        
         | philipswood wrote:
         | Links to a WordPress login page.
         | 
          | You probably meant:
         | 
         | https://www.novaspivack.com/technology/nova-mode-the-ultimat...
        
           | airesearcher wrote:
            | Ah yes, good point.
        
         | Tiberium wrote:
         | Can you share how it radically improves productivity? From a
         | quick glance it seems to be not very practical, at least for
         | me.
        
       | airesearcher wrote:
       | Main URL has been corrected thanks to the moderators.
        
       | dr_dshiv wrote:
       | It's neat but I would love to see some more examples of why you
       | think it is good. I tend to be skeptical of adding anything to
       | the context window that isn't closely aligned with my goals. My
        | biggest issue is usually getting ChatGPT out of the "blurry
        | center" (where it just blathers pablum) and into a "productive
        | spike" where it can genuinely do specific work.
        
         | sp332 wrote:
         | I agree, at least add some sample outputs so we can see what
         | we're getting into.
        
           | airesearcher wrote:
           | See my post and scroll down. Several examples there.
           | 
            | You can also add the instructions and type //? for the
            | manual.
            | 
            | Or type //$ for a list of 40 examples.
        
             | airesearcher wrote:
             | I will make a GPT version... ChatGPT has to approve it for
             | some reason....
             | 
              | ok, it's working ... I had to remove "ChatGPT" from the
              | name of my GPT for it to work...
        
               | airesearcher wrote:
               | https://chat.openai.com/g/g-tcXXGxXmA-nova-mode-ai-chat-
               | auth...
               | 
               | GPT Version, with more features
        
       | simonw wrote:
       | I suggest also making this available as a GPT. I don't like
       | pasting random stuff into my custom instructions, because that
       | will affect all of my future usage. I'd much rather try out a GPT
       | where the effects of those instructions stay limited to that one
       | place.
        
         | airesearcher wrote:
         | https://chat.openai.com/share/3ebde0ee-5db6-44c3-a836-2b4ee3...
        
           | airesearcher wrote:
           | Waiting for ChatGPT to approve this link... not sure it is
           | public yet
        
             | airesearcher wrote:
              | ok, it's working ... I had to remove "ChatGPT" from the
              | name of my GPT for it to work...
        
       | bongodongobob wrote:
       | I don't see why this is good. You're clogging your context with a
       | bunch of unnecessary clutter. Just tell it what you want it to
       | do, no? Like why am I spending 1500 characters per message on the
       | hello world loop example? I get the same output from just asking
       | it to do that.
       | 
       | The message indexing is kind of interesting, but again, it's a
       | huge waste. Just write a wrapper rather than wasting all those
       | tokens and muddying your context.
       | 
       | I think in the end this is just eye candy and is going to get you
       | worse results. Granted, I haven't tested thoroughly, but neither
       | has OP.
        
         | airesearcher wrote:
         | It's useful because it allows you to refer to messages in the
         | chat by message number, and to define functions to use on them.
         | You can do a lot of really powerful things with it.
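         
        A minimal sketch of the "just write a wrapper" idea from the
        parent comment: keep the numbered transcript outside the model's
        context and resolve message indexes yourself. It assumes the
        official openai Python client (>= 1.0); the IndexedChat class,
        the model name, and the sample prompts are made up here for
        illustration and are not part of Nova Mode.
         
            from openai import OpenAI
         
            client = OpenAI()  # reads OPENAI_API_KEY from the environment
         
            class IndexedChat:
                """Numbered transcript kept locally, so messages can be
                referenced by index without spending any prompt tokens."""
         
                def __init__(self, model="gpt-4"):
                    self.model = model
                    self.messages = []
         
                def ask(self, text):
                    """Send a user message and return (index, reply)."""
                    self.messages.append({"role": "user", "content": text})
                    reply = client.chat.completions.create(
                        model=self.model,
                        messages=self.messages,
                    ).choices[0].message.content
                    self.messages.append(
                        {"role": "assistant", "content": reply})
                    return len(self.messages) - 1, reply
         
                def recall(self, index):
                    """Look up a stored message locally, no API call."""
                    return self.messages[index]["content"]
         
            chat = IndexedChat()
            i, answer = chat.ask("Print 'hello world' three times.")
            print(i, answer)
            print(chat.recall(i))  # message i, retrieved outside the chat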
        
       | airesearcher wrote:
       | Here it is a GPT if you want to try it that way:
       | 
       | https://chat.openai.com/share/3ebde0ee-5db6-44c3-a836-2b4ee3...
        
         | airesearcher wrote:
         | I will make a GPT version... ChatGPT has to approve it for some
         | reason....
        
           | airesearcher wrote:
           | It works now
        
       | sitkack wrote:
        | Neat. I did something similar and created a stack-based NLP
        | language; I used it mainly for synthesizing prompts for image
        | generation.
        
       | airesearcher wrote:
       | Type //? for the manual
        
       | EMM_386 wrote:
       | I'm not much interested in the prompt itself, but I am
       | continually amazed that ChatGPT is able to make sense of prompts
       | like this.
       | 
       | That is pretty far from "language", and I can't see how any of
       | that has been seen in its "training data".
       | 
       | I mean ... you can add something like "//! = loop mode = do loop:
       | (ask USR for instruct (string, num); iterate string num times);
        | Nesting ok." and it can not only parse that (or "tokenize" it)
        | but then somehow find relationships in its internal
        | high-dimensional vector space sufficient to ... pseudo-"execute"
        | it?
       | 
       | I don't know. Obviously, not my area of expertise, although I can
       | say I've spent a lot of time _trying_ to understand even the
        | basics. But then I'll see an example like this, and be reminded
       | of how little I understand any of it.
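         
        One way to poke at this behaviour directly is to pass the rule
        quoted above as a system message and then trigger it. A rough
        sketch, assuming the official openai Python client (>= 1.0); the
        model name and the test input are assumptions, and whether the
        model "executes" the rule faithfully is not guaranteed.
         
            from openai import OpenAI
         
            client = OpenAI()
         
            # The instruction quoted in the comment above.
            rule = ('//! = loop mode = do loop: (ask USR for instruct '
                    '(string, num); iterate string num times); Nesting ok.')
         
            reply = client.chat.completions.create(
                model="gpt-4",
                messages=[
                    {"role": "system", "content": rule},
                    # Trigger the rule with a string and a count.
                    {"role": "user", "content": '//! "hello world", 3'},
                ],
            )
            print(reply.choices[0].message.content)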
        
         | jadbox wrote:
          | I wonder how much the "gpt" within OpenAI understands and how
          | much syntax is conditionally parsed and run through state
          | machines prior to the prompt being sent to the transformers.
        
         | huijzer wrote:
          | I did my MSc thesis on ML in 2019, so I'm definitely no
          | expert, but one mental model has worked well for me.
         | 
         | Machine learning models learn thousands of small functions
          | which map input to output. A higher-level "function" will
          | choose which function is used to map the input to the output.
          | So that's what amazes me about ChatGPT: think about all the
          | little things it must understand to do what it does. For
          | example, the model learned how to interpret if clauses and
          | other language nuances, how variable definitions should be
          | used, and which words belong to other languages.
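         
        A toy illustration of that mental model: many small
        input-to-output functions, plus a higher-level "function" that
        chooses which one applies. This is only an analogy for intuition,
        not how a transformer is actually built.
         
            def uppercase(text):
                return text.upper()
         
            def reverse(text):
                return text[::-1]
         
            # The higher-level "function": it picks which small function
            # should map this particular input to an output.
            def router(task, text):
                skills = {"shout": uppercase, "mirror": reverse}
                return skills[task](text)
         
            print(router("shout", "hello"))   # HELLO
            print(router("mirror", "hello"))  # olleh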
        
         | AndrewKemendo wrote:
          | If you think about it relative to the level at which you're
          | interacting with the computer, then it makes sense.
         | 
          | So, for example, if I'm talking to a compiler, I have to talk
          | to it in effectively perfect sequential structure.
         | 
          | If I'm talking to a web server, then it's with a scripting
          | language, in order to communicate something from another
          | programming language that's interacting at a lower level.
         | 
          | At the other extreme, if I'm talking to a kernel, then I
          | really only have a couple of things I can say, because its
          | language is memory blocks.
         | 
          | In this case, because I'm talking to what's effectively an
          | interpreter, the default language looks like a scripting
          | language rather than a programming language or natural
          | language.
         | 
          | Which, in effect, is what's happening: these are basically
          | scripts people are running against the structure of the GPT's
          | sequence interpretation, so it's a weird mix.
        
       ___________________________________________________________________
       (page generated 2023-12-10 23:00 UTC)