[HN Gopher] ChatGPT Enterprise
       ___________________________________________________________________
        
       ChatGPT Enterprise
        
       Author : davidbarker
       Score  : 421 points
       Date   : 2023-08-28 17:09 UTC (5 hours ago)
        
 (HTM) web link (openai.com)
 (TXT) w3m dump (openai.com)
        
       | pradn wrote:
       | Non-use of enterprise data for training models is table-stakes
       | for enterprise ML products. Google does the same thing, for
       | example.
       | 
       | They'll want to climb the compliance ladder to be considered in
       | more highly-regulated industries. I don't think they're quite
       | HIPAA-compliant yet. The next thing after that is probably in-
        | transit geofencing, so that the hardware used by an institution
        | resides in a particular jurisdiction. This stuff seems boring but it's an
       | easy way to scale the addressable market.
       | 
       | Though at this point, they are probably simply supply-limited.
        | Just serving the first wave will keep them at maximum capacity.
       | 
       | (I do wonder if they'll start offering batch services that can
       | run when the enterprise employees are sleeping...)
        
       | ftxbro wrote:
        | > For all enterprise customers, it offers:
        | > Customer prompts and company data are not used for training
        | OpenAI models.
        | > Unlimited access to advanced data analysis (formerly known as
        | Code Interpreter)
        | > 32k token context windows for 4x longer inputs, files, or
        | follow-ups
       | 
        | I'd thought all those had been available for non-enterprise
       | customers, but maybe I was wrong, or maybe something changed.
        
         | cowthulhu wrote:
         | I believe the API (chat completions) has been private for a
         | while now. ChatGPT (the chat application run by OpenAI on their
         | chat models) has continued to be used for training... I believe
         | this is why it's such a bargain for consumers. This
         | announcement allows businesses to let employees use ChatGPT
         | with fewer data privacy concerns.
        
           | whimsicalism wrote:
           | You can turn off history & training on your data
        
             | mirekrusin wrote:
              | Yes, they bundled it under a single dark-pattern toggle
              | so most people won't click it.
        
               | swores wrote:
               | Worse (IMO) than that is the fact that when the privacy
               | mode is turned on, you can't access your previously saved
               | conversations nor will it save anything you do while it's
               | enabled. Really shitty behaviour.
        
             | hammock wrote:
             | If you turn off history and training, you as the user can
             | no longer see your history, and OpenAI won't train with
             | your data. But can customer prompts and company data still
             | be resold to data brokers?
        
             | thomassmith65 wrote:
             | Note that turning 'privacy' on is buried in the UI; turning
             | it off again requires just a single click.
             | 
             | Such dark patterns, plus their involvement in crypto, their
             | shoddy treatment of paying users, their security
             | incidents... make it harder for me to feel good about
             | OpenAI spearheading the introduction of (real) AI into the
             | world today.
        
               | whimsicalism wrote:
               | > Such dark patterns, plus their involvement in crypto,
               | their shoddy treatment of paying users, their security
               | incidents... make it harder for me to feel good about
               | OpenAI spearheading the introduction of (real) AI into
               | the world today.
               | 
               | Interesting. My opinion is it is a great product that
               | works well for me, I don't find my treatment as a paying
               | user shoddy, and their security incident gives me pause.
        
               | thomassmith65 wrote:
               | > I don't find my treatment as a paying user shoddy
               | 
                | I have never paid for a service with worse uptime in my
               | life than ChatGPT. Why? So that OpenAI could ramp up
               | their user-base of both free and paying users. They
               | knowingly took on far more paying users than they could
               | properly support for months.
               | 
               | There are justifications for the terrible uptime that are
               | perfectly valid, but in the end, a customer-focused
               | company would have issued a refund to the paying
               | customers for the months during which they were shafted
               | by OpenAI prioritizing growth.
               | 
               | That doesn't mean OpenAI isn't terrific in _some_ ways.
                | They're also lousy _in others_. With so many tech
               | companies, the lousy aspects grow in significance as the
               | years pass. OpenAI, because of all the reasons in my
               | parent comment, is not off to a great start, imo.
        
               | astrange wrote:
               | They're not involved in crypto, just the CEO is.
        
               | thomassmith65 wrote:
               | That's an important correction. Thanks, I got a bit
               | carried away with the comment. There's enough hearsay on
               | the internet, and I don't want to contribute.
               | 
               | While we're at it, another exaggeration I made is
               | "security incidents"; in fact, I am only aware of one.
        
         | BoorishBears wrote:
          | It pretty much is if you use OpenAI via Azure, or if you're
          | large enough and talk to their sales (the 2x faster is
          | dedicated capacity, I'm guessing)
        
         | SantalBlush wrote:
         | > Customer prompts and company data are not used for training
         | OpenAI models.
         | 
         | This is borderline extortion, and it's hilarious to witness as
         | someone who doesn't have a dog in this fight.
        
           | jacquesm wrote:
           | As long as they provide free Enterprise access for all those
           | whose data they already stole...
        
           | swores wrote:
           | Not really, they want some users to give them conversation
           | history for training purposes and offer cheaper access to
           | people willing to provide that.
        
             | kuchenbecker wrote:
             | Exactly, there is an opportunity cost to NOT training on
             | this data.
        
             | SantalBlush wrote:
             | This assumes the portion of the enterprise fee related to
             | this feature is only large enough to cover the cost of
             | losing potential training data, which is an absurd
             | assumption that can't be proven and has no basis in
             | economic theory.
             | 
             | Companies are trying to maximize profit; they are not
             | trying to minimize costs so they can continue to do you
             | favors.
             | 
              | These arguments crop up frequently on HN: "This company is
             | doing X to their customers to offset their costs." No, they
             | are a company, and they are trying to make money.
        
         | nsxwolf wrote:
         | I'm going to see if the word "Enterprise" convinces my
         | organization to allow us to use ChatGPT with our actual
         | codebase, which is currently against our rules.
        
           | SanderNL wrote:
            | No Copilot either?
        
         | _boffin_ wrote:
          | What about prompt input and response output retention for x
          | days for abuse monitoring? Does it not do that for enterprise?
         | For Microsoft Azure's OpenAI service, you have to get a waiver
         | to ensure that nothing is retained.
        
         | saliagato wrote:
          | Everything but the 32k version and 2x speed is the same as
          | the consumer platform
        
           | swores wrote:
           | https://news.ycombinator.com/item?id=37298864
           | 
            | Having conversations saved to go back to, as in the default
            | setting on Pro (it's disabled when a Pro user turns on the
            | privacy setting), is another big difference.
        
           | jwpapi wrote:
           | 32k is available via API
        
         | hammock wrote:
         | >Customer prompts and company data are not used for training
         | OpenAI models.
         | 
         | That's great. But can customer prompts and company data be
         | resold to data brokers?
        
         | brabel wrote:
         | I think the real feature is this:
         | 
         | " We do not train on your business data or conversations, and
         | our models don't learn from your usage. ChatGPT Enterprise is
         | also SOC 2 compliant and all conversations are encrypted in
         | transit and at rest. "
        
           | hammock wrote:
           | >" We do not train on your business data or conversations,
           | and our models don't learn from your usage. ChatGPT
           | Enterprise is also SOC 2 compliant and all conversations are
           | encrypted in transit and at rest. "
           | 
           | That's great. But can customer prompts and company data be
           | resold to data brokers?
        
           | dahwolf wrote:
            | It's exactly the opposite. The entire point of an enterprise
           | option would be that you DO train it on corporate data,
           | securely. So the #1 feature is actually missing, yet is
           | announced as in the works.
        
             | __loam wrote:
             | What are you talking about?
        
             | IanCal wrote:
             | You probably wouldn't want that, you'd want to integrate
             | with your data for lookups but rarely for training a new
             | model.
        
               | dahwolf wrote:
               | Can't believe the pushback I'm getting here. The use case
               | is stunningly obvious.
               | 
               | Companies want to dump all their Excels in it and get
               | insights that no human could produce in any reasonable
               | amount of time.
               | 
                | Companies want to dump a zillion help desk tickets into
                | it and gain meaningful insights from it.
               | 
               | Companies want to dump all their Sharepoints and Wikis
               | into it that currently nobody can even find or manage,
               | and finally have functioning knowledge search.
               | 
               | You absolutely want a privately trained company model.
        
               | IanCal wrote:
               | None of the use cases you are describing require training
               | a new model. You really don't want to train a new model,
               | that's not a good way of getting them to learn reliable
                | facts and do so without losing other knowledge. The
                | fine-tuning guidance for GPT-3.5 suggests something
                | like _under a hundred examples_.
               | 
               | What you want is to get an existing model to search a
               | well built index of your data and use that information to
               | reason about things. That way you also always have
               | entirely up to date data.
               | 
               | People aren't missing the use cases you describe, they're
               | disagreeing as to how to achieve those.
        
             | blowski wrote:
             | Coca Cola doesn't want to train a model that can be bought
             | by Pepsi.
        
               | no_wizard wrote:
               | I'm imagining some corporate scenario where Coca Cola or
               | Pepsi are purposefully training models on poisoned
               | information so they can out each other for trying to use
               | AI services like ChatGPT to glean information about
               | competitors via brute force querying of some type
        
               | beardedwizard wrote:
               | But that's exactly the point, an enterprise offering
               | should be able to provide guarantees like this while also
                | allowing training: a model per tenant. I think the
                | reality is they are doing multi-tenant models, which
                | means they have no way to guarantee your data won't be
                | leaked unless they disable training altogether.
        
               | dahwolf wrote:
                | Well, the idea is that you can't buy a competitor's
                | trained model.
        
           | ftxbro wrote:
            | Which part of that is new? I was pretty sure they were
            | already saying "we do not train on your business data or
            | conversations, and our models don't learn from your usage."
            | Maybe the SOC 2 and encryption is new?
        
             | vidarh wrote:
             | They don't train on data when you either use the _API_ or
             | disable chat history, which is inconvenient.
        
               | justanotheratom wrote:
               | yes, this is terrible. I want chat history, but I don't
               | want them to use my data. Can't have both, even though I
               | am paying $20/month!
        
               | air7 wrote:
               | Really? This seems like one Chrome extension away...
        
               | varispeed wrote:
               | so that someone else gets your data?
               | 
               | Chrome extension is a no go.
        
               | flangola7 wrote:
               | Who says it can't save it to a local database?
        
               | Hrundi wrote:
               | It can, until the extension developer receives a tempting
               | offer for it, as has happened countless times
        
               | littlestymaar wrote:
               | Fork the extension and use your own then.
        
         | bg24 wrote:
         | I think you missed this part:
         | 
         | ChatGPT Enterprise is also SOC 2 compliant and all
         | conversations are encrypted in transit and at rest. Our new
         | admin console lets you manage team members easily and offers
         | domain verification, SSO, and usage insights, allowing for
         | large-scale deployment into enterprise.
         | 
          | I think this will have solid product-market fit. The product
          | (ChatGPT) was ready, but not enterprise-ready. Now it is.
          | They will get a lot of sales leads.
        
           | ttul wrote:
           | Just the SOC2 bit will generate revenue... If your
           | organization is SOC2 compliant, using other services that are
           | also compliant is a whole lot easier than risking having your
           | SOC2 auditor spend hours digging into their terms and
           | policies.
        
           | _jab wrote:
           | "all conversations are encrypted ... at rest" - why do
           | conversations even need to _exist_ at rest? Seems sus to me
        
             | flangola7 wrote:
             | Chat history is helpful.
        
       | siva7 wrote:
       | There is the old silicon valley saying "This is a feature, not a
        | product". Translated to the new AI age, this is the moment when
        | many startups will realize that what they were building wasn't a
        | product but just a feature extension of ChatGPT.
        
         | [deleted]
        
         | warthog wrote:
         | Sad but seems to be correct with OpenAI showing its true colors
        
       | holoduke wrote:
        | I am wondering: are there already profitable businesses using
        | ChatGPT? To me the tech is really impressive. But what kind of
        | really big commercial product exists at this point? I only know
        | of assistants like Copilot or some word assistant. But what
        | else? Isn't this just a temporary bubble?
        
         | tspike wrote:
         | If you're asking about consumer facing products, I'm aware of
         | eBay using it to help sellers write product descriptions. But,
         | I think the bigger immediate use case is making daily work
         | easier inside these companies.
         | 
         | I've used it extensively to speed up the process of making
         | presentations, drafting emails, naming things, rubber-ducking
         | for coding, etc.
        
       | [deleted]
        
       | whalesalad wrote:
       | "we are bleeding money on these H100 machines, we need enterprise
       | contracts asafp"
        
         | [deleted]
        
         | dominojab wrote:
         | [dead]
        
       | hellodanylo wrote:
        | I don't quite understand where OpenAI's market segment ends and
       | Azure's begins.
        
         | TheGeminon wrote:
         | There will probably be overlap. If you are an Azure customer
         | you use Azure, if not you use OpenAI.
        
           | KeplerBoy wrote:
           | It's Azure all the way down. The OpenAI stuff is certainly
           | hosted on Azure.
        
         | phillipcarter wrote:
         | It's helpful to think of OpenAI as Microsoft's R&D lab for AI
         | without the political and regulatory burdens that MSR has to
         | abide by. Through that lens, it's really all just the same
         | thing. There is no endgame for OpenAI that doesn't involve
         | being a part of Microsoft.
        
       | blitzar wrote:
       | Wake me up when they launch _The Box (tm)_.
        
         | [deleted]
        
         | sxates wrote:
         | I'm holding out for the Signature Edition
        
       | kelseyfrog wrote:
        | This would be such a no-brainer purchase if there were server-
        | side logit warping a la grammar-based sampling or jsonformer.
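For readers unfamiliar with logit warping: the idea behind grammar-based sampling is that before each sampling step, the logits of every token the grammar disallows are set to negative infinity, so sampling can only ever produce valid output. A toy sketch with a made-up six-token vocabulary and a trivial JSON "grammar" (everything here is illustrative, not OpenAI's or jsonformer's actual API):

```python
import math

# Stand-in vocabulary; a real model has tens of thousands of tokens.
VOCAB = ["{", "}", '"key"', ":", '"value"', "hello"]

def allowed_next(generated):
    """Toy grammar accepting exactly the object {"key": "value"}."""
    table = {
        None: {"{"},
        "{": {'"key"'},
        '"key"': {":"},
        ":": {'"value"'},
        '"value"': {"}"},
    }
    last = generated[-1] if generated else None
    return table.get(last, set())

def warp(logits, generated):
    """Mask logits of grammar-disallowed tokens to -inf."""
    allowed = allowed_next(generated)
    return [x if tok in allowed else -math.inf
            for tok, x in zip(VOCAB, logits)]

def greedy_decode(logits_fn, steps=5):
    """Greedy sampling over the warped (constrained) distribution."""
    out = []
    for _ in range(steps):
        logits = warp(logits_fn(out), out)
        out.append(VOCAB[max(range(len(VOCAB)), key=logits.__getitem__)])
    return "".join(out)

# Raw logits strongly favor the invalid token "hello"...
result = greedy_decode(lambda out: [0.0, 0.0, 0.0, 0.0, 0.0, 5.0])
```

Even though the raw logits favor "hello", the warped sampler can only emit the grammar-legal string, which is why doing this server-side would make structured output a no-brainer.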
        
       | rvz wrote:
        | Seems like they are quite startled by Llama 2 and Code Llama,
        | and how their rapid adoption is accelerating the AI race to
        | zero. Why have this when Llama 2 and Code Llama exist and bring
        | the cost close to $0?
       | 
        | This sounds like a huge waste of money for something that should
       | just be completely on-device or self-hosted if you don't trust
       | cloud-based AI models like ChatGPT Enterprise and want it all
       | private and low cost.
       | 
       | But either way, Meta seems to be already at the finish line in
       | this race and there is more to AI than the LLM hype.
        
         | YetAnotherNick wrote:
          | If you could offer a stable 70B Llama API at half the price
          | of the ChatGPT API, I would pay for it. I know HN likes to
          | believe everything is close to $0, but it is hardly the case.
        
           | coolspot wrote:
           | (Not affiliated) https://together.ai/pricing
        
             | YetAnotherNick wrote:
              | So it is 50% more expensive than OpenAI. Even if it were
              | comparable, that would prove my point that you can hardly
              | do it for "cost close to $0".
        
         | make3 wrote:
          | I'm really not sure this can be interpreted as them being
          | startled by Llama 2 at all.
         | 
         | From the very beginning everyone knew data privacy & security
         | would be one of the main issues for corporations.
        
         | willsmith72 wrote:
          | most teams don't want to self-host, and definitely don't want
          | to have to run on-device, eating up their RAM
        
           | whimsicalism wrote:
            | There is no reason these models will be self-host only.
        
             | willsmith72 wrote:
              | agreed, and I can't wait for GPT-4 to have great competition
             | in terms of ease, price and performance. I was responding
             | to this
             | 
             | > something that should just be completely on-device or
             | self-hosted if you don't trust cloud-based AI models like
             | ChatGPT Enterprise and want it all private and low cost
        
           | lancesells wrote:
            | I get the self-host part, but if you had a dedicated
            | machine would the RAM be an issue? Can you run it on a
            | machine with like 128GB of RAM or the GPU equivalent?
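Back-of-envelope arithmetic suggests the answer is yes for quantized weights: weights take parameters times bytes per parameter, so a 70B-parameter model needs about 2 bytes per parameter at fp16 and proportionally less when quantized (weights only; the KV cache and activations add overhead on top):

```python
# Weight memory for a 70B-parameter model at common precisions.
params = 70e9  # 70 billion parameters

for name, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
    gb = params * bits / 8 / 1e9  # bits -> bytes -> GB
    print(f"{name}: ~{gb:.0f} GB")
```

So fp16 weights (~140 GB) overflow a 128 GB machine, but 8-bit (~70 GB) or 4-bit (~35 GB) quantization fits comfortably, at some cost in quality and speed.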
        
         | sebzim4500 wrote:
         | Llama 2 is nowhere near the capability of GPT-4 for general
         | purpose tasks
        
         | mliker wrote:
         | I can see some companies not having the technical ability to
         | pull off offline LLMs, so this product could cater to that
         | market.
        
           | Patrick_Devine wrote:
           | Maybe, but that's why things like ollama.ai are trying to
           | fill the gap. It's simple, and you don't need all of the
           | heavy weight enterprise crap if nothing ever leaves your
           | system.
        
           | [deleted]
        
         | rangledangle wrote:
          | Less technical companies throw money at problems to solve
          | them. Like mine, sadly... Even if something takes only a
          | small amount of effort, companies will throw money at it for
          | zero effort.
        
           | runnerup wrote:
           | Zero execution risk, rather than zero effort. There's always
           | a 10% chance that implementation goes on forever and spending
           | some money eliminates that risk.
        
           | _zoltan_ wrote:
           | why should they solve it? if it's not a core competency, just
           | buy it.
        
         | screamingninja wrote:
         | > This sound like a huge waste of money for something that
         | should just be completely on-device or self-hosted
         | 
         | I can imagine this argument being made repeatedly over the past
         | several decades whenever anyone makes a decision to use any
          | paid cloud service. There is value in self-hosting FOSS
          | services and managing them in-house, and there is value in
          | letting someone else manage them for you. Ultimately it depends
         | on the business use case and how much effort / risk you are
         | willing to handle.
        
       | agnokapathetic wrote:
       | Because it's clear as mud from a privacy perspective:
       | 
       | # OpenAI Offerings
       | 
       | - ChatGPT Free - trains on your data unless you Opt Out
       | 
       | - ChatGPT Plus - trains on your data unless you Opt Out
       | 
       | - ChatGPT Enterprise - does _not_ train on your data
       | 
       | - OpenAI API - does _not_ train on your data
       | 
       | # Microsoft Offerings
       | 
       | - GitHub Copilot - trains on your data
       | 
       | - GitHub Copilot for Business - does _not_ train on your data
       | 
       | - Bing Chat - trains on your data
       | 
       | - Bing Chat Enterprise - does _not_ train on your data
       | 
       | - Microsoft 365 Copilot - does _not_ train on your data
       | 
       | - Azure OpenAI Service - does _not_ train on your data
       | 
       | Opt-out link: https://help.openai.com/en/articles/7039943-data-
       | usage-for-c...
        
         | pastor_bob wrote:
         | >- ChatGPT Plus - trains on your data unless you Opt Out
         | 
         | And if you opt out, they delete the chats from your history so
         | you can't reference them later for your own use. Slick!
        
           | agnokapathetic wrote:
           | There are two different mechanisms here:
           | 
            | Disabling "Chat History / Training" in ChatGPT Settings will
           | disable chat history.
           | 
           | Opting out through the linked form in that FAQ will allow you
           | to keep chat history.
        
         | [deleted]
        
           | [deleted]
        
         | blibble wrote:
         | > We do not train on your business data or conversations, and
         | our models don't learn from your usage.
         | 
         | doesn't say they're not selling it to someone else who might...
        
           | agnokapathetic wrote:
           | This would be a violation of GDPR and CCPA and would expose
           | them to massive litigation liability.
        
       | cgen100 wrote:
        | I hope no one who has done a good job as an employee will be
        | disappointed to suddenly find himself replaced by a snippet of
        | code after the weights have been sufficiently adjusted.
        | 
        | It's like slurping the very last capital a worker has out of
        | their mind and soul. Most companies exist to make a profit, not
        | to employ humans.
       | 
       | Paired with the pseudo-mocked-tech-bro self-peddling BS this
       | announcement reads like dystopia to me. Not that technological
       | progress is bad, but technology should empower users (for real,
       | by giving them more control) not increase power imbalances.
       | 
       | Let's see how many people who cheered today will cheer just as
       | happily in 2028. My bet: just a few.
        
         | paulddraper wrote:
         | > technology should empower users (for real, by giving them
         | more control) not increase power imbalances
         | 
         | Technology should make life easier.
         | 
         | Automation is good.
        
           | cgen100 wrote:
           | > Technology should make life easier.
           | 
           | A totalitarian state can make your life very comfortable with
           | technology. Wanna trade for freedom?
           | 
           | Automation is the best, if the majority can benefit from it.
        
           | lewhoo wrote:
            | I guess life is easy if you have no job, but only until
            | your food runs out.
        
         | i-use-nixos-btw wrote:
         | A common retort to this is that companies also exist to compete
         | (and thus make a profit), so those that use AI to augment their
         | staff rather than replace them will be at an advantage.
         | 
         | Honestly, I can see it, but there are definitely SOME jobs at
         | risk, and it will almost certainly reduce hiring in junior
         | positions.
         | 
         | I am a manager in a dev team. I have a small team and too many
         | plates spinning, and I've been crying out for more hires for
         | years.
         | 
         | I moved to using AI a lot more. ChatGPT and Copilot for general
         | dev stuff, and I'm experimenting with local llama-based models
          | too. It's not that I'm getting these things to fill any one
         | role, but to reduce the burden on the roles we have. Honestly,
         | as things stand, I'm not crying out for more hires any more.
        
           | cgen100 wrote:
           | I'm all for making us all more efficient, but not at the cost
           | of creating new data monopolies, if possible. The price is
           | very high, even though it's not immediately obvious.
           | 
           | We already have enormous concentration of data in a few
           | places and it's only getting worse. Centralization is
           | efficiency, but the benefits of that get skimmed
           | disproportionally, to the detriment of what allowed these
           | systems to emerge in the first place: our society.
        
       | simonw wrote:
       | "Unlimited access to advanced data analysis (formerly known as
       | Code Interpreter)"
       | 
       | Code Interpreter was a pretty bad name (not exactly meaningful to
       | anyone who hasn't studied computer science), but what's the new
       | name? "advanced data analysis" isn't a name, it's a feature in a
       | bullet point.
        
         | ftxbro wrote:
         | Also I'd heard anecdotally on the internet (Ethan Mollick's
         | twitter I think) that 'code interpreter' was better than GPT 4
         | even for tasks that weren't code interpretation. Like it was
         | more like GPT 4.5. Maybe it was an experimental preview and
         | only enterprises are allowed to use it now. I never had access
         | anyway.
        
           | swores wrote:
           | I still have access in my $20/m non-Enterprise Pro account,
           | though it has indeed just updated its name from Code
           | Interpreter to Advanced Data Analysis. I haven't personally
           | noticed it being any better than standard GPT4 even for
           | generation of code that can't be run by it (ie non-Python
           | code).
        
             | shmoogy wrote:
             | I've been using it heavily for the last week - hopefully it
             | doesn't become enterprise only... it's very convenient to
             | pass it some examples and generate and test functions.
             | 
             | And it does seem "better" than standard 4 for normal tasks
        
               | swores wrote:
               | Ah I'd better start using it more again and see if I find
               | it better too
        
             | gcanyon wrote:
             | I also have a pro account, and I've looked for and not seen
             | code interpreter in my account. Am I just missing it?
        
         | z7 wrote:
         | In my account it now says "Advanced Data Analysis" instead of
         | "Code Interpreter". Looks like it is the new name.
        
       | warthog wrote:
       | Well the message in this video certainly did not age well:
       | https://www.youtube.com/watch?v=smHw9kEwcgM
       | 
       | TLDR: This might have just killed a LOT of startups
        
         | siva7 wrote:
          | Haha, I also thought about that Y Combinator video. Yep, their
          | prediction didn't age well and it's becoming clear that OpenAI
          | is actually a direct competitor to most of the startups that
          | are using their API. Most "chat your own data" startups will
          | be killed by this move.
        
           | polishdude20 wrote:
            | Yeah, like, if OpenAI can engineer ChatGPT, they can sure as
            | hell engineer a lot of the apps built on top of ChatGPT out
            | there.
        
           | ZoomerCretin wrote:
            | No different than Apple, then. A lot of value is provided
            | to customers by offering these features through a stable
            | organization not likely to shutter within 6 months, unlike
            | these startup "ChatGPT wrappers". I hope that they are able
            | to make a respectable sum and pivot.
        
             | warthog wrote:
              | I think almost every startup is focusing on enterprise as
              | it sounds lucrative, but selling to an enterprise might
              | qualitatively offset its benefits in some way (very
              | painful).
             | 
             | Personally I love what Evenup Law is doing. Basically find
             | a segment of the market that runs like small businesses and
             | that has a lot of repetitive tasks they have to do
             | themselves and go to them. Though I can't really think of
             | other segments like this :)
        
         | littlestymaar wrote:
         | Any startup that is using ChatGPT under the hood is just doing
         | market research for OpenAI for free. The same happened when
          | people started experimenting with GPT3 for code completion,
         | right before being replaced by Copilot.
         | 
         | If you want to build an AI start-up and need a LLM, you _must_
          | use Llama or another model that you can control and host
         | yourself, anything else is basically suicide.
        
       | pama wrote:
       | Here is what SOC2 means. I hope this allows more companies to
       | offer GPT-4 to their employees.
       | https://en.wikipedia.org/wiki/System_and_Organization_Contro...
        
       | yieldcrv wrote:
        | The best thing is that this will probably reduce load on ChatGPT
        | 4 meaningfully.
        
       | simonw wrote:
       | I'd love to know how much of the preparation for this release was
       | hiring and training a sales team for it.
        
         | hubraumhugo wrote:
         | I had the same thought. Then I wondered why they even bother
         | with the manual sales process. Enterprises will buy it anyway.
        
       | dangerwill wrote:
       | "From engineers troubleshooting bugs, to data analysts clustering
       | free-form data, to finance analysts writing tricky spreadsheet
       | formulas--the use cases for ChatGPT Enterprise are plenty. It's
       | become a true enabler of productivity, with the dependable
       | security and data privacy controls we need."
       | 
       | I'm sorry but the financial analyst using chatGPT to write their
       | excel formulas for them, and explicitly calling out that it is
       | generating a formula that the analyst can't figure out on their
       | own ("tricky") is an incredibly alarming thing to call out as a
       | use case for chatGPT. I can't think of a lower reward, higher
       | risk task to throw chatGPT at than financial analysis /
       | reporting. Subtle differences in how things are reported in the
        | financial world can really matter.
        
       | slowhadoken wrote:
       | Exposing your business to ChatGPT isn't an option at some
        | companies. Can you imagine the security risk at a company like
        | SpaceX or NASA?
        
       | saliagato wrote:
        | I thought that Microsoft was busy with enterprises, yet OpenAI
        | announces a product for enterprises. I have a feeling that the
        | two do not get along.
        
         | anonyfox wrote:
          | Or maybe they were urged to offset more operational costs - and
          | I would believe that companies already paying for Microsoft
          | products will happily pay for OpenAI in addition, just to be
          | safe.
        
           | worrycue wrote:
           | Why the heck would anyone pay for the same thing from 2
           | different vendors?
        
             | aabhay wrote:
             | You've not worked with Salesforce or Oracle ISVs...
        
         | stuckinhell wrote:
          | Isn't Microsoft unable to scale their version of GPT-4?
        
         | datadrivenangel wrote:
         | Sell the same product under two brands at the same time?
         | 
         | Optimal business strategy. Makes it look like there's more
         | competition, and changes the decision from "do we use ChatGPT"
         | to "Which GPT vendor do we use?"
        
           | brigadier132 wrote:
           | Microsoft has a stake in OpenAI but they don't have a
           | controlling interest in it. What they got instead was
           | exclusive access to the models on Azure. So they benefit from
            | OpenAI's success but they benefit more from their own success
           | in the space and in a way they are competitors.
        
             | flangola7 wrote:
             | Exclusive access? Source?
        
         | Xeophon wrote:
         | It's pretty interesting to see both companies copying each
         | other. Bing Chat has GPT4 with Vision, Chat History and some
         | other goodies whereas OpenAI extends towards B2B.
        
         | ttul wrote:
         | Microsoft is primarily a mid-market company. They definitely
         | sell to enterprise as well, but what makes Microsoft truly
         | great is their ability to sell at enormous scale through a vast
         | network of partners to every SMB in the world.
         | 
         | OpenAI is a tiny company, relative to Microsoft. They can't
         | afford to build a giant partner network. At best, they can
         | offer a forum-supported set of products for the little guys and
         | a richly supported enterprise suite. But the middle market will
         | be Microsoft's to own, as they always do.
        
       | alexfromapex wrote:
       | Here we go, the first step of wringing profit out of the platform
       | has begun.
        
         | toomuchtodo wrote:
         | "Profit is like oxygen. You need it to survive, but if you
         | think that oxygen is the purpose of your life then you're
         | missing something."
        
       | Racing0461 wrote:
       | > unlimited higher-speed GPT-4 access
       | 
       | aka the nerfed version. high speed means the weights were relaxed
       | leading to faster output but worse reasoning and memory.
        
         | nostrebored wrote:
         | Or it means that the compute on the inference nodes is more
         | efficient? Or that it's tenanted in a way that decreases
         | oversaturation? Or you're getting programmatic improvements in
         | the inference layer that are being funded by the enterprise
         | spend?
        
         | slsii wrote:
         | What does it mean to "relax" weights and how does that speed up
         | output?
        
           | sigotirandolas wrote:
           | I assume he means quantization (e.g. scaling the weights from
           | 16-bit to 4-bit) and it speeds up the output by reducing the
           | amount of work done.
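For readers unfamiliar with the term, here is a toy sketch of what that kind of quantization means (illustrative only; nothing here reflects how OpenAI actually serves GPT-4):

```python
import numpy as np

# Toy weight quantization: map 16-bit float weights onto 4-bit
# integers (16 levels), trading precision for memory and speed.
rng = np.random.default_rng(0)
weights = rng.standard_normal(8).astype(np.float16)

# Symmetric linear quantization into the int4 range [-8, 7]
scale = np.abs(weights).max() / 7.0
q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)

# Dequantize to approximate the original weights; the rounding
# error per weight is bounded by scale / 2.
restored = q.astype(np.float16) * scale
print("max abs error:", float(np.abs(weights.astype(np.float32) -
                                     restored.astype(np.float32)).max()))
```

Production inference stacks use far more sophisticated schemes (per-channel scales, calibration data), but the basic precision-for-speed trade is the same.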
        
         | bagels wrote:
         | Do you have any references on this? I have only seen a lot of
         | speculation.
        
         | GaggiX wrote:
         | Or they have the priority on high-end hardware or even
         | dedicated one.
        
       | Exuma wrote:
       | > and the most powerful version of ChatGPT yet
       | 
       | ChatGPT just got snappier
        
       | aestetix wrote:
       | Will they include their weight tables with the price?
        
       | irrational wrote:
       | > The 80% statistic refers to the percentage of Fortune 500
       | companies with registered ChatGPT accounts, as determined by
       | accounts associated with corporate email domains.
       | 
       | Yeah... I have no doubt that people at my Fortune 100 company
       | tried it out with their corporate email domains. We have about
       | 80,000 employees, so it seems nearly impossible that somebody
       | wouldn't have tried it.
       | 
       | But, since then the policy has come down that nobody is allowed
       | to use any sort of AI/LLM without written authorization from both
       | Legal and someone at the C-suite level. The main concern is we
       | might inadvertently use someone else's IP without authorization.
       | 
       | I have no idea how many other Fortune companies have implemented
       | similar policies, but it does call the 80% number into question
       | for me.
        
         | vorticalbox wrote:
          | What about locally running LLMs?
        
           | irrational wrote:
           | The policy is specifically about third party AI/LLMs. I
           | assume a locally running LLM would be okay as long as it was
           | not trained by any material whatsoever external to the
           | company. That is, we could only use our own IP to train it.
        
         | idopmstuff wrote:
         | This is pretty standard for early-stage startups citing Fortune
         | 500 use. Not representative and fairly misleading, but it's
         | what they've done at most of the companies I've worked at.
        
       | epups wrote:
       | There is just nothing out there, open source or otherwise, that
        | even comes close to GPT-4. Therefore, the value proposition is
        | clear: this provides you with access to the SOTA, 2x faster,
        | without restrictions.
       | 
       | I can actually see this saving a lot of time for employees (1-10%
       | maybe?), so the price is most likely calculated on that and a few
       | other factors. I think most big orgs will eat it like cake.
        
         | vorticalbox wrote:
         | That depends on the task. There are plenty of LLM that will run
         | locally that will do things like write emails, write a summary
         | of some text.
        
       | participant1138 wrote:
        | How is this different from using the GPT API on Azure? I thought
        | that allowed you to keep your data corpus/documents private as
        | well, i.e. it doesn't get sent to their servers for training.
        
         | [deleted]
        
         | tedsanders wrote:
         | One is a product. One is an API. Both can be useful, and both
         | can come with privacy guarantees.
        
       | 0xcde4c3db wrote:
       | Any hot takes on what the median application of this looks like
       | at a practical level? What springs to mind for me is replacing
       | the classic "weed-out" tiers of customer service like phone
       | trees, chatbots that are actually crappy FAQ/KB search engines,
       | and human representatives who are only allowed to follow a
       | script. On balance, this might even be a win for everyone
       | involved, given how profoundly terrible the status quo is. While
       | it's sort of terrifying at a philosophical level that we might be
       | mollifying the masses with an elaborate illusion, the perception
       | of engaging with an agent that's actually responsive to your
       | words might make the whole process at least incrementally less
       | hellish.
        
         | xkqd wrote:
         | > On balance, this might even be a win for everyone involved
         | 
         | Well, other than the millions of jobs at stake here. But I'm
         | sure they can just learn to code or become an engineer
        
       | victorsup wrote:
        | Has anyone else noticed a significant decrease in the speed of
        | all GPT-4 services?
        
       | wunderwuzzi23 wrote:
       | Wonder if for the Enterprise version they will fix the Image
       | Markdown Data Exfiltration vulnerability that's been known for a
       | while.
       | 
       | https://embracethered.com/blog/posts/2023/chatgpt-webpilot-d...
       | 
       | Seems like a no-go for companies if an attacker can steal stuff.
        
       | EGreg wrote:
       | _You own and control your business data in ChatGPT Enterprise. We
       | do not train on your business data or conversations, and our
       | models don't learn from your usage._
       | 
       | How can we be sure of this? Just take their word for it?
        
         | simias wrote:
         | How else?
         | 
         | If you notice that some of your confidential info made it into
         | next generations of the model, you'll be able to sue them for
         | big $$$ for breach of contract. That's a pretty good incentive
         | for them not to play stupid games with that.
        
       | fdeage wrote:
       | Interesting, but I am a bit disappointed that this release
       | doesn't include fine-tuning on an enterprise corpus of documents.
       | This only looks like a slightly more convenient and privacy-
       | friendly version of ChatGPT. Or am I missing something?
        
         | gopher_space wrote:
         | Retrieval Augmented Generation would be something to check out.
         | There was a good intro on the subject posted here a week or 3
         | ago.
        
           | internet101010 wrote:
           | This is one of the reasons we decided to go with Databricks.
           | Embed all the things for RAG during ETL.
        
         | idopmstuff wrote:
         | At the bottom, in their coming soon section: "Customization:
         | Securely extend ChatGPT's knowledge with your company data by
         | connecting the applications you already use"
        
           | fdeage wrote:
           | I saw it, but it only mentions "applications" (whatever that
           | means) and not bare documents. Does this mean companies might
           | be able to upload, say, PDFs, and fine-tune the model on
           | that?
        
             | mediaman wrote:
             | Pretty unlikely. Generally you don't use fine-tuning for
             | bare documents. You use retrieval augmented generation,
             | which usually involves vector similarity search.
             | 
             | Fine-tuning isn't great at learning knowledge. It's good at
             | adopting tone or format. For example, a chirpy helper bot,
             | or a bot that outputs specifically formatted JSON.
             | 
             | I also doubt they're going to have a great system for fine-
              | tuning. Successful fine-tuning requires some thought about
             | what the data looks like (bare docs won't work), at which
             | point you have technical people working on the project
             | anyway.
             | 
             | Their future connection system will probably be in the
             | format of API prompts to request data from an enterprise
             | system using their existing function fine-tuning feature.
             | They tried this already with plugins, and they didn't work
             | very well. Maybe they'll come up with a better system.
             | Generally this works better if you write your own simple
             | API for it to interface with which does a lot of the heavy
             | lifting to interface with the actual enterprise systems, so
             | the AI doesn't output garbled API requests so much.
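The "write your own simple API for it to interface with" idea can be sketched like this (the function name and schema are made up for illustration; the schema shape follows OpenAI's function-calling convention):

```python
import json

# Hypothetical narrow wrapper in front of an enterprise system.
# Behind this simple signature you'd handle auth, pagination,
# retries, and the real ERP/CRM calls.
def get_customer_orders(customer_id: str, limit: int = 5):
    return [{"customer": customer_id, "order": i} for i in range(limit)]

# The schema the model sees: only two fields to fill in, which
# leaves far less room for garbled requests than a raw enterprise
# API would.
function_spec = {
    "name": "get_customer_orders",
    "description": "List recent orders for a customer.",
    "parameters": {
        "type": "object",
        "properties": {
            "customer_id": {"type": "string"},
            "limit": {"type": "integer", "default": 5},
        },
        "required": ["customer_id"],
    },
}

# Simulate the model emitting arguments and us dispatching the call.
model_args = json.loads('{"customer_id": "acme-42", "limit": 2}')
print(get_customer_orders(**model_args))
```

The heavy lifting (joins across systems, permissions, error handling) lives in your wrapper, not in the prompt.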
        
               | kenjackson wrote:
               | When I first started working with GPT I was disappointed
               | in this. I thought like the previous commentor that I
               | could fine tune by adding documents and it would add it
               | to the "knowledge" of GPT. Instead I had to do what you
               | suggest is vector similarity search, and add the relevant
               | text to the prompt.
               | 
               | I do think an open line of research is some way for users
               | to just add arbitrary docs in an easy way to the LLM.
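The retrieve-then-prompt workflow described above can be sketched in a few lines. Here embed() is a stand-in word-count "embedding" so the example is self-contained; a real system would call an embedding model and a vector store instead:

```python
from collections import Counter
import math

def embed(text):
    # Stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "Refunds are processed within 14 days of a return.",
    "The warranty covers manufacturing defects for two years.",
    "Shipping is free on orders over 50 dollars.",
]

def build_prompt(question, k=1):
    # Rank documents by similarity to the question, then paste the
    # top-k into the prompt as context for the LLM.
    ranked = sorted(docs, key=lambda d: cosine(embed(question), embed(d)),
                    reverse=True)
    context = "\n".join(ranked[:k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How long do refunds take?"))
```

The LLM never "learns" the documents; the relevant text just rides along in every prompt.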
        
               | fdeage wrote:
               | Yes, this would definitely be a game changer for almost
               | all companies. Considering how huge the market is, I
               | guess it's pretty difficult to do, or it would be done
               | already.
               | 
                | I certainly don't expect a nice drag-and-drop interface
                | where I can drop in my Office files and ask questions
                | about them to arrive in 2023. Maybe 2024?
        
               | tempestn wrote:
               | That would be the absolute game-changer. Something with
               | the "intelligence" of GPT-4, but it knows the contents of
               | all your stuff - your documents, project tracker, emails,
               | calendar, etc.
               | 
               | Unfortunately even if we do get this, I expect there will
               | be significant ecosystem lock-in. Like, I imagine
               | Microsoft is aiming for something like this, but you'd
               | need to use all their stuff.
        
               | r_thambapillai wrote:
               | There are great tools that do this already in a support-
               | multiple-ecosystems kind of way! I'm actually the CEO of
               | one of those tools: Credal.ai - which lets you point-and-
               | click connect accounts like O365, Google Workspace,
                | Slack, Confluence, etc., and then you can use OpenAI,
               | Anthropic etc to chat/slack/teams/build apps drawing on
               | that contextual knowledge: all in a SOC 2 compliant way.
               | It does use a Retrieval-Augmented-Generation approach
               | (rather than fine tuning), but the core reason for that
               | is just that this tends to actually offer better results
               | for end users than fine tuning on the corpus of documents
               | anyway! Link: https://www.credal.ai/
        
               | jrpt wrote:
               | You can use https://Docalysis.com for that. Disclosure: I
               | am the founder of Docalysis.
        
             | idopmstuff wrote:
             | Yeah, I'll be curious to see what it means by this. Could
             | be a few things, I think:
             | 
             | - Codebases
             | 
             | - Documents (by way of connection to your
             | Box/SharePoint/GSuite account)
             | 
             | - Knowledgebases (I'm thinking of something like a Notion
             | here)
             | 
             | I'm really looking forward to seeing what they come up with
             | here, as I think this is a truly killer use case that will
             | push LLMs into mainstream enterprise usage. My company uses
             | Notion and has an enormous amount of information on there.
             | If I could ask it things like "Which customer is integrated
             | with tool X" (we keep a record of this on the customer page
             | in Notion) and get a correct response, that would be
             | immensely helpful to me. Similar with connecting a support
             | person to a knowledgebase of answers that becomes
             | incredibly easy to search.
        
           | xyst wrote:
            | Great, now ChatGPT can train on outdated documents from the
            | 2000s, provide more confusion to new people, and give us more
            | headaches.
        
             | toyg wrote:
             | On the other hand, there was a lot of knowledge in those
             | documents that effectively got lost - while the relevant
             | tech is still underpinning half the world. For example:
             | DCOM/COM+.
        
             | figassis wrote:
             | I think this is actually of great value.
        
         | BoorishBears wrote:
         | You don't fine-tune on a corpus of documents to give the model
         | knowledge, you use retrieval.
         | 
          | They support uploading documents for that via the code
          | interpreter, and they're adding connectors to the applications
          | where the documents live, so I'm not sure what more you're
          | expecting.
        
           | fdeage wrote:
           | Yes, but what if they are very large documents that exceed
           | the maximum context size, say, a 200-page PDF? In that case
           | won't you be forced to do some form of fine-tuning, in order
           | to avoid a very slow/computationally expensive on-the-fly
           | retrieval?
           | 
           | Edit: spelling
        
             | Difwif wrote:
             | Typical retrieval methods break up documents into chunks
             | and perform semantic search on relevant chunks to answer
             | the question.
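That chunking step might look something like this (window and overlap sizes are illustrative, not anything OpenAI or any vendor has documented):

```python
# Split a long document into overlapping word-window chunks so each
# piece fits within an embedding/context budget. The overlap keeps
# sentences that straddle a boundary retrievable from either side.
def chunk(text, size=200, overlap=40):
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

# Stand-in for text extracted from a long PDF: 500 numbered "words".
doc = " ".join(str(i) for i in range(500))
pieces = chunk(doc)
print(len(pieces), "chunks")
```

Each chunk is then embedded once up front, so at question time only a similarity lookup plus one LLM call is needed, regardless of the document's total length.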
        
             | BoorishBears wrote:
              | Fine-tuning the LLM _in the way that you're mentioning_ is
              | not even an option: as a practical rule, fine-tuning the
              | LLM will let you do style transfer, but your knowledge
              | recall won't improve (there are edge cases, but none apply
              | to using ChatGPT).
             | 
             | That being said you _can_ use fine tuning to improve
             | retrieval, which indirectly improves recall. You can do
              | things like fine tune the model you're getting embeddings
             | from, fine tune the LLM to craft queries that better match
             | a domain specific format, etc.
             | 
             | It won't replace the expensive on-the-fly retrieval but it
             | will let you be more accurate in your replies.
             | 
             | Also retrieval can be infinitely faster than inference
             | depending on the domain. In well defined domains you can
              | run old-school full-text search and leverage the LLM's skill
             | at crafting well thought out queries. In that case that
             | runs at the speed of your I/O.
        
             | jrpt wrote:
             | We have >200 page PDFs at https://docalysis.com/ and
             | there's on-the-fly retrieval. It's not more computationally
             | expensive than something like searching one's inbox (I'd
              | imagine you have more than 200 pages' worth of emails in
              | your inbox).
        
       | ajhai wrote:
        | Explicitly calling out that they are not going to train on
        | enterprises' data, along with SOC 2 compliance, is going to put
        | a lot of enterprises at ease and encourage them to embrace
        | ChatGPT in their business processes.
       | 
       | From our discussions with enterprises (trying to sell our LLM
       | apps platform), we quickly learned how sensitive enterprises are
       | when it comes to sharing their data. In many of these
       | organizations, employees are already pasting a lot of sensitive
       | data into ChatGPT unless access to ChatGPT itself is restricted.
       | We know a few companies that ended up deploying chatbot-ui with
       | Azure's OpenAI offering since Azure claims to not use user's data
       | (https://learn.microsoft.com/en-us/legal/cognitive-
       | services/o...).
       | 
       | We ended up adding support for Azure's OpenAI offering to our
       | platform as well as open-source our engine to support on-prem
       | deployments (LLMStack - https://github.com/trypromptly/LLMStack)
       | to deal with the privacy concerns these enterprises have.
        
         | amelius wrote:
         | > is going to put a lot of the enterprises at ease and embrace
         | ChatGPT in their business processes.
         | 
         | Except many companies deal with data of other companies, and
         | these companies do not allow the sharing of data.
        
         | dools wrote:
         | > we quickly learned how sensitive enterprises are when it
         | comes to sharing their data
         | 
         | "They're huge pussies when it comes to security" - Jan the
         | Man[0]
         | 
         | [0] https://memes.getyarn.io/yarn-
         | clip/b3fc68bb-5b53-456d-aec5-4...
        
         | mveertu wrote:
         | So, how do you plan to commercialize your product? I have
         | noticed tons of chatbot cloud-based app providers built on top
         | of ChatGPT API, Azure API (ask users to provide their API key).
         | Enterprises will still be very wary of putting their data on
         | these multi-tenant platforms. I feel that even if there is
         | encryption that's not going to be enough. This screams for
         | virtual private LLM stacks for enterprises (the only way to
         | fully isolate).
        
           | ajhai wrote:
           | We have a cloud offering at https://trypromptly.com. We do
           | offer enterprises the ability to host their own vector
           | database to maintain control of their data. We also support
           | interacting with open source LLMs from the platform.
           | Enterprises can bring up https://github.com/go-
           | skynet/LocalAI, run Llama or others and connect to them from
           | their Promptly LLM apps.
           | 
           | We also provide support and some premium processors for
           | enterprise on-prem deployments.
        
             | mveertu wrote:
             | Enterprises can bring up https://github.com/go-
             | skynet/LocalAI, run Llama or others and connect to them
             | from their Promptly LLM apps - So spin up GPU instances and
             | host whatever model in their VPC and it connects to your
             | SaaS stack? What are they paying you for in this scenario?
        
       | happytiger wrote:
       | Has there been some resolution to the copyright issues that I'm
        | not aware of? In my conversations with execs that's been a serious
       | concern -- basically that generated output from AI systems can't
       | be reliably protected.
       | 
       | I refer to the concept that output could be deemed free of
       | copyright because they are not created by a human author, or that
       | derivative works can be potential liabilities because they
       | resemble works that were used for training data or whatnot (and
       | we have no idea what was really used to train).
       | 
       | There was the recent court decision confirming:
       | 
       | https://www.cooley.com/news/insight/2023/2023-08-24-district...
       | 
       | Seems odd to start making AI systems a data-at-the-center-of-the-
       | company technology when such basic issues exist.
       | 
       | Is this not a concern anymore?
        
         | weinzierl wrote:
         | So, the point that AI output might be a derivative work of its
         | input is finally dead? I thought what execs were really afraid
         | of was the risk that copyright holders will come around after
         | some time and claim rights on the AI output even if it is only
         | vaguely similar to the copyrighted work.
        
         | paxys wrote:
         | Copyright is only an issue for creative works. If a company is
         | automating a customer service chat or an online ordering
         | process or a random marketing page/PR announcement or something
         | of that sort via ChatGPT why would they even care?
        
           | heisenbit wrote:
            | If the code that implements the automation too closely
            | resembles copyrighted code, it violates the rights of the
            | creator. But who would know what happens behind corporate
            | walls?
        
         | CobrastanJorji wrote:
         | I think they're basically taking the "Uber" strategy here:
         | primary business is probably illegal, but if they do it hard
         | enough and at enough scale and create enough value for enough
         | big companies, then they become big enough to either get
         | regulations changed or strong enough to weather lawsuits and
         | prosecutions. Their copyright fig leaf is perhaps analogous to
         | Uber's "it's not a taxi service so we don't need taxi
         | medallions" fig leaf.
        
           | cmiles74 wrote:
           | Or make as much money as you can, while you can.
        
           | choppaface wrote:
           | Might be closer to the "Google" strategy, as Google also
           | faced significant litigation with image search and publishers
           | did a ton to shut down their large investment in Google
            | Books. Moreover, Uber flouted their non-compliance in
           | contrast to sama testifying before Congress and trying to
           | initiate regulatory capture early.
           | 
           | There's undeniably similar amounts of greed, although TK
           | seems to genuinely enjoy being a bully versus sama is more of
           | a futurist.
        
         | og_kalu wrote:
         | >There was the recent court decision confirming:
         | https://www.cooley.com/news/insight/2023/2023-08-24-district...
         | 
          | This decision seems specifically about whether the AI itself
         | can hold the copyright as work for hire, not whether output
         | generated by ML models can be copyrighted.
        
         | [deleted]
        
         | jay_kyburz wrote:
         | If you are concerned somebody will steal your IP or infringe
         | your copyright, they first have to be 100% sure that some text
         | was indeed written by an AI, and only an AI.
         | 
         | In practice, if you suspect something was written by an AI and
         | are considering copying it, you would be safer to just ask an
         | AI to write you one as well.
        
         | stale2002 wrote:
         | The copyright ruling that you are referencing is being
         | significantly misunderstood.
         | 
          | The only thing that the ruling said is basically that the most
          | low-effort version of AI output does not have copyright
          | protection.
          | 
          | I.e., if you just go into Midjourney and type in "super cool
          | anime girl!" and that's it, the results are not protected.
         | 
         | But there is so much more you can do. For example, you can
         | generate an image, and then change it. The resulting images
         | would be protected due to you adding the human input to it.
        
         | caesil wrote:
         | You're referring to a Copyright Office administrative ruling.
         | 
         | It's a pretty strange ruling at odds with precedent, and it has
         | not been tested in court.
         | 
         | Traditionally all that's required for copyrightability is a
         | "minimal creative spark", i.e. the barest evidence that some
         | human creativity was involved in creating the work. There
         | really hasn't traditionally been any lower bound on how
         | "minimal" the "spark" need be -- flick a dot of paint at a
         | canvas, snap a photo without looking at what you're
         | photographing, it doesn't matter as long as a human initiated
         | the work somehow.
         | 
         | However, the Copyright Office contends that AI-generated text
         | and images do not contain a minimal creative spark:
         | 
         | https://www.copyright.gov/ai/ai_policy_guidance.pdf
         | 
         | This is obviously asinine. Typing in "a smiling dolphin" on
         | Midjourney and getting an image of a smiling dolphin is clearly
         | not a program "operat[ing] randomly or automatically without
         | any creative input or intervention from a human author".
         | 
         | If our laws have meaning, it will be overruled in court.
         | 
         | Of course, judges are also susceptible to the marketing-driven
         | idea that Artificial Intelligence is a separate being, a
         | translucent stock photo robot with glowing blue wiring that
         | thinks up ideas independently instead of software you must run
         | with a creative input. So there's no guarantee sanity will
         | prevail.
        
           | choppaface wrote:
           | Not so much copyrighting generated output, it's more about to
           | what extent training is fair use as well as when the algo
           | spits out an exact copy of training data.
        
             | caesil wrote:
             | That's a separate issue. What I linked above is an opinion
             | specifically on whether generated output is copyrightable.
        
         | graypegg wrote:
         | If something is in the public domain, and you create something
         | new with it, you have the rights to the new work, there isn't
         | any sort of alike-licensing on public domain works in most-if-
         | not-all jurisdictions.
         | 
         | This is why music engravers can sell entire books of classical
         | sheet music from *public domain* works. They become the owners
         | of that specific expression. (Their arrangement, font choice,
         | page layout, etc)
         | 
         | If the AI content is public domain, and the work it generates
         | is incorporated into some other work, the entity doing the
         | incorporation owns the work. It's not permanently tainted or
         | something as far as I know.
        
         | judge2020 wrote:
          | We know that AI-generated output is generally not
          | copyrightable, but there isn't any current ruling on whether
          | or not you're free to use copyright-protected content to
          | train a model legally (e.g. as fair use, since it's been
          | 'learned from' rather than directly copied). Some companies
          | are using OpenAI's GPT models, which are almost certainly
          | trained on tons of copyrighted and academic content, while
          | other companies are being more cautious and commissioning
          | models trained specifically on public domain/licensed
          | content.
        
       | xyst wrote:
       | Can't wait to speak with a ChatGPT representative!!
       | 
       | me: "I would like to close my account"
       | 
       | chatgpt: "I'm sorry, did you mean open or close an account"
       | 
       | me: " close account"
       | 
       | Chatgpt: "okay what type of account would you like to open"
       | 
       | Me: "fuck you"
       | 
       | Chatgpt: "I'm sorry I do not recognize that account type. Please
       | repeat"
       | 
       | me: "I would like to close my account"
       | 
        | Chatgpt: "okay i can close out your account, please verify
        | identity"
       | 
       | me: <identity phrase>
       | 
       | Chatgpt: I'm sorry that's incorrect. Your account has been locked
       | indefinitely until it can be reviewed manually. Please wait for
       | 5-10 business days
        
       | hubraumhugo wrote:
       | Interesting that they offer GPT-4 32k in the enterprise version
       | while only giving very few people API access to it. I guess we'll
       | see that more often in the future.
        
         | ftkftk wrote:
         | It's expensive to run.
        
           | tinco wrote:
           | So why not put a price on it?
        
       | muttantt wrote:
       | That was quick. Companies offering APIs end up competing with
       | their developer base that built end-user facing products. Another
       | example is Twilio that offers retail-ready products now such as
       | Studio, prebuilt Flex, etc.
        
       | aaronharnly wrote:
       | We (like many other companies) have deployed an internal UI[1]
       | that integrates with our SSO and makes calls via the OpenAI API,
       | which has better data privacy terms than the ChatGPT website.
       | 
       | We'd be potentially very interested in an official internal-
       | facing ChatGPT, with a caution that the economics of the
       | consumption-based model have so far been advantageous to us,
       | rather than a flat fee per user per month. I can say that based
       | on current usage, we are not spending anywhere close to $20 per
       | user per month across all of our staff.
       | 
       | [1] We used this: https://github.com/dotneet/smart-chatbot-ui
        
       | rrgok wrote:
        | Is it really so hard for companies to provide a price range
        | for the Enterprise plan publicly on the pricing page?
       | 
       | Why can't I, as an individual, have the same features of an
       | Enterprise plan?
       | 
       | What is the logic behind this practice other than profit
       | maximization?
       | 
        | I'm willing to pay more to have unlimited high-speed GPT-4 and
        | longer inputs with a 32k token context.
       | 
        | EDIT: since I'm getting a lot of replies. Genuine question:
        | how should I proceed to get a reasonable price as an
        | individual for unlimited high-speed GPT-4 and a longer token
        | context?
        
         | HtmlProgrammer wrote:
          | Because the price is so big they don't want to scare you off
          | with sticker shock; then they offer you an 85% discount to
          | get you over the line.
        
         | MathMonkeyMan wrote:
         | > What is the logic behind this practice other than profit
         | maximization?
         | 
         | I don't know, but I can't imagine any other logic.
         | 
         | Maybe posting the price they'd like to charge would scare away
         | almost all interested parties.
         | 
         | Maybe the price they charge you depends more on how much money
         | they think you have than it does on a market's "decision" on
         | what the product is worth.
        
         | pavlov wrote:
          | _> "I'm willing to pay more"_
         | 
         | How much more? That's the question that "talk to us" enterprise
         | pricing is trying to answer.
        
           | toddmorey wrote:
           | This is really well put!
        
           | zaat wrote:
           | I'm sure that's the correct answer, and that their very best
           | was invested in analyzing the max profit strategy (as they
           | should).
           | 
            | What I'm wondering is whether the minimum price at which
            | they can profitably offer the service is likely to be too
            | steep for someone like me, who interprets "talk to us" as
            | the online equivalent of being shown the door. The other
            | explanation I can see is that there aren't many users who
            | react to a "talk to us" button by closing the tab instead
            | of closing a deal, but I find that implausible.
        
             | wpietri wrote:
             | > I'm wondering if it means that the minimal price they can
             | offer the service with at profit, is likely to be too steep
             | for anyone like me
             | 
             | I think the answer to that is "no". The problem is that
             | they don't want to reveal the minimal price to their
             | initial round of customers.
             | 
             | There are two basic ways you can think about pricing: cost-
             | plus and value-minus. We programmers tend to like the
             | former because it's clear, rational, and simple. But if
             | you've got something of unknown value and want to maximize
             | income, the latter can be much more appealing.
             | 
             | The "talk to sales" approach means they're going to enter
             | into a process where they find the people who can get the
             | most out of the service. They're going to try to figure out
             | the total value added by the service. And they'll negotiate
             | down from there. (Or possibly up; somebody once said the
             | goal of Oracle was to take all your money for their server
             | software, and then another $50k/year for support.)
             | 
             | Eventually, once they've figured out the value landscape,
             | they'll probably come back for users like you, creating a
             | commoditized product offering that's limited in ways that
             | you don't care about but high-dollar customers can't live
             | without. That will be closer to cost-plus. For example,
             | note Github's pricing, which varies by more than 10x
             | depending on what they think they can squeeze you for:
             | https://github.com/pricing
        
         | danielvaughn wrote:
         | Because it's often heavily negotiated. At the enterprise level,
         | custom requests are entertained, and teams can spend weeks or
         | months building bespoke features for a single client. So yeah,
         | it's kinda fundamentally impossible.
        
           | phillipcarter wrote:
           | Oh yes. I'm willing to bet that it involves things like
           | progressive discounts on # of tokens or # of seats, etc etc.
           | This is just how you get access to the big bucks.
        
         | FredPret wrote:
         | Profit maximization is why ChatGPT even exists - why be
         | surprised when that's the end result?
        
         | capableweb wrote:
         | > What is the logic behind this practice other than profit
         | maximization?
         | 
          | Why would it be anything other than profit maximization? It's a
         | for-profit company, with stakeholders who want to maximize the
         | possible profits coming from it, seems simple enough to grok,
         | especially for users on Startup News Hacker News.
        
         | toddmorey wrote:
         | Because the truth is, each deal is custom packaged and priced
         | for each enterprise. It's all negotiated pricing. Call it
         | "value pricing" or whatever you want, prices are set at the
         | tolerance level of each company. A price-sensitive enterprise
         | might pay $50k while another company won't blink at $80k for
         | essentially the same services.
        
         | xgl5k wrote:
          | They should just create another consumer tier with those
          | features. There shouldn't be a need for individuals to want
          | the Enterprise plan.
        
         | [deleted]
        
         | [deleted]
        
         | alexb_ wrote:
         | >other than profit maximization
         | 
         | Are you aware what the entire point of a business is?
        
         | sarnowski wrote:
          | If it goes in the direction of Microsoft Copilot, then you
          | can check out the recent announcement. Microsoft currently
          | estimates that $30/user/month is a good list price to get
          | "ChatGPT with all your business context" to your employees.
          | 
          | https://blogs.microsoft.com/blog/2023/07/18/furthering-our-a...
        
         | travisjungroth wrote:
         | > What is the logic behind this practice other than profit
         | maximization?
         | 
         | That's a real big "other than"...
        
         | fourseventy wrote:
         | These enterprise deals will be $100k annually at least.
        
           | danielvaughn wrote:
           | At least. I once spent months negotiating an enterprise deal
           | that was initially quoted at $1M annually. We talked them
           | down but it took a long time.
        
             | fourseventy wrote:
             | Wow. What type of software was it?
        
       | Lio wrote:
       | What I got from this is that if I use Klarna then they'll share
       | any related information with OpenAI. This is not what I want.
        
       | tommek4077 wrote:
        | Do you get uncensored answers with this? Oftentimes it
        | produces false positives for my workload, and I don't care
        | for feedback buttons during work hours.
        
         | distantsounds wrote:
         | If you're relying on AI to do your job, you're not doing a good
         | job. Figure it out yourself.
        
           | rokkamokka wrote:
           | But it's a tool like any other. This is like saying "if
           | you're relying on an IDE you're not doing a good job"
        
             | distantsounds wrote:
             | an IDE aids you in programming, it doesn't do the
             | programming for you. do you really need this distinction
             | explained to you?
        
               | tommek4077 wrote:
                | I am sorry, I will start writing on my clay tablet
                | right away.
        
           | 1xb3l wrote:
           | [dead]
        
           | vorticalbox wrote:
            | I use AI all the time for my job. Why waste time writing
            | JSDoc comments, pull request descriptions, etc., when LLMs
            | are so great at writing summaries?
        
       | ojosilva wrote:
       | Clicked on ChatGPT / Compare ChatGPT plans / Enterprise ...
       | 
       | > Contact sales
       | 
       | Oops. Scary.
       | 
       | I'm missing the Teams plan: transparent pricing with a common
       | admin console for our team. Yes, fast GPT-4, 32k context,
       | templates, API credits... they're all very nice-to-haves, but
       | just the common company console would be crucial for onboarding
       | and scaling-up our team and needs without the big-bang "enter-
       | pricey" stuff.
        
         | crooked-v wrote:
          | Any "Contact sales" stuff has just been an instant "no" at
          | any company I've ever worked at, because it always means the
          | numbers are too high to include in the budget unless it's a
          | directive coming down directly from the top.
        
           | dahwolf wrote:
           | It depends. We once were quoted 300K/year by a SaaS company.
           | We replied by saying that our budget is 20K. "Fair enough,
           | we'll take that".
        
             | thoughtFrame wrote:
              | I don't know if that's a smart way to bypass pesky
              | hidden-information negotiations and suss out the other
              | party's upper bound, or a really stupid way to do
              | business...
        
               | dahwolf wrote:
               | Their decision makes sense, in a weird way.
               | 
               | A lot of value in some SaaS apps is in the initial
               | investment it took to build it, not in the cost to host a
               | customer's assets.
               | 
               | If the runtime costs of a new customer are negligible,
               | would you rather have 0K or 20K?
        
           | exizt88 wrote:
           | That's where directives for enterprise contracts usually come
           | from. I'm sure they won't even talk to anyone not willing to
           | pay $100k+ per year. Salesforce's AI Cloud starts at $365k a
           | year.
        
             | Gene_Parmesan wrote:
             | > I'm sure they won't even talk to anyone not willing to
             | pay $100k+ per year.
             | 
             | Wouldn't surprise me. We had a vendor whose product we had
             | used at relatively reasonable rates for multiple years
             | suddenly have a pricing model change. It would have seen
             | our cost go from $10k/yr to $100k/yr. As a small nonprofit
             | we tried to engage them in any sort of negotiation but the
             | response was essentially a curt "too bad." Luckily a
             | different vendor with a similar product was more than happy
             | to take our $10k.
        
         | [deleted]
        
         | ttul wrote:
         | The jump to enterprise pricing suggests that they have enormous
         | enterprise demand and don't need to bother with SMB "teams"
         | pricing. I suspect OpenAI is leaving the SMB part up to
         | Microsoft to figure out, since that's Microsoft's forte through
         | their enormous partner program.
        
         | ilaksh wrote:
         | It makes it impossible to access for bootstrapping, at least
         | for people who have budget constraints. Which is just reality,
         | it's a scarce resource and I appreciate what they have made
         | available so far inexpensively.
         | 
         | But hopefully it does give a little more motivation to all of
         | the other great work going on with open models to keep trying
         | to catch up.
        
       | Pandabob wrote:
        | This seems really cool, but I guess most companies in the EU
        | won't dare to use this due to GDPR concerns and will instead
        | opt for the Azure version, where you can choose to use GPT
        | models hosted in Azure's EU servers.
        
         | simonw wrote:
         | I'd be surprised if OpenAI didn't offer "and we'll run it on EU
         | servers for you, too" as part of a $1m+ deal.
         | 
         | Surprising it didn't make the initial launch announcement
         | though.
        
         | brookladdy wrote:
         | Currently, GPT-4 is not even available anymore for new
         | customers at the only EU location they offer (France Central).
        
       | llmllmllm wrote:
        | Interesting that they're still centered around chat as the
        | interface. With https://flowch.ai (our product), we're
        | building it much more around projects and reports, which we
        | think is often more suitable for businesses.
       | 
       | We're going after some of these use cases:
       | 
       | Want a daily email with all the latest news from your custom data
       | source (or Google) for a topic? How about parsing custom data and
       | scores from your datasets using prompts with all the complicated
       | bits handled for you, then downloading as a simple CSV? Or even
       | simply bulk generating content, such as generating Press Releases
       | from your documents?
       | 
       | All easy with FlowChai :)
       | 
       | I think there's room for many different options in this space,
       | whether that be Personal, Small Business or Enterprise.
       | 
       | Here's an example of automatically scraped arXiv papers on GPT4,
       | turned into a report (with sources) generated by GPT4:
       | https://flowch.ai/shared/6107d220-4e19-4bdc-a566-e84e8a60565...
        
         | azinman2 wrote:
          | Some feedback (it's clear you're just pitching FlowChai, but
          | that's OK, it's HN):
         | 
          | I quickly scrolled through your webpage and had no idea what
          | it was. It's extremely text heavy, with generic images that
          | didn't communicate anything. I wanted to know what the
          | product LOOKED like, especially as you're describing the
          | difference between it and the chat interface of OpenAI.
         | 
         | I think you updated your comment (or I missed it) with the link
         | to a "report" - it looked just like the output of one of the
         | text bubbles except it had some (source) links (which I think
         | Bing does as well)? It didn't seem all that different to me.
        
           | llmllmllm wrote:
            | Very fair; we have demo videos, guides, etc. planned for
            | the next week or so. As it's a tool that can do many
            | things, it's hard to describe. Still a lot to do :)
           | 
           | In terms of what makes the report different from Bing: this
           | could be any source of data: scraped from the web, search,
            | API upload, file upload, etc., so there's a lot more power
            | there. Also, it's not just one-off reports; there's
            | automation that would allow, for example, a weekly report
            | on the latest papers on GPT-4 (or whatever you're
            | interested in).
        
         | notavalleyman wrote:
          | It doesn't seem to be in a usable state yet. I created an
          | account and realised there aren't actually any features to
          | play with yet. I gave a URL for scheduled reports, but I
          | cannot configure anything about them.
         | 
          | You didn't offer me any way to delete my account and remove
          | the email address I saved in your system. I hope you don't
          | start sending me emails after not giving me the ability to
          | delete the account.
        
       | dangerwill wrote:
        | Given our industry's long history of lying about data
        | retention and usage, OpenAI's opaqueness, and Sam Altman's
        | specific sleaziness, I wouldn't trust this privacy statement
        | one bit. But I know the statement will be enough for corporate
        | "due diligence".
       | 
       | Which is a shame because an actual audit of the live training
        | data of these systems could be possible, albeit imperfect. Set
        | up an independent third-party audit firm that gets daily
        | access to a randomly chosen slice of the training data and
        | checks its source.
       | Something along those lines would give some actual teeth to these
       | statements about data privacy or data segmentation.
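The sampling audit proposed above could be sketched roughly as follows. All record shapes, source names, and the `audit_slice` helper are hypothetical illustrations of the idea, not any real OpenAI or auditor interface:

```python
import random

# Hypothetical sketch of the audit idea above: an independent auditor
# draws a random daily slice of training records and flags any record
# whose claimed source is not in an agreed-upon registry.

def audit_slice(records, approved_sources, sample_size=3, seed=None):
    rng = random.Random(seed)
    sample = rng.sample(records, min(sample_size, len(records)))
    # Return the ids of sampled records with unapproved provenance.
    return [rec["id"] for rec in sample
            if rec["source"] not in approved_sources]

records = [
    {"id": 1, "source": "public-domain-corpus"},
    {"id": 2, "source": "licensed-news-archive"},
    {"id": 3, "source": "customer-chat-logs"},  # should never appear
]
violations = audit_slice(
    records, {"public-domain-corpus", "licensed-news-archive"}, seed=0)
```

Random sampling means the auditor never needs full access to the corpus, which is what makes the scheme imperfect but practical.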
        
       | thih9 wrote:
       | As we increase our reliance on AI in the work context, what about
       | AI works not being copyrightable?
        
       | vyrotek wrote:
       | Any correlation between this and the sudden disappearance of this
       | repo?
       | 
       | https://github.com/microsoft/azurechatgpt
       | 
       | Past discussion:
       | 
       | https://news.ycombinator.com/item?id=37112741
        
         | phillipcarter wrote:
         | No relation. That project was just a reference implementation
         | of "chat over your data via the /chat API" with a really
         | misleading name.
        
         | [deleted]
        
         | jmorgan wrote:
         | Seemed like a great project. Hope to see it come back!
         | 
          | There are some great open-source projects in this space (not
          | quite the same; many are focused on local LLMs like Llama 2
          | or Code Llama, which was released last week):
         | 
         | - https://github.com/jmorganca/ollama (download & run LLMs
         | locally - I'm a maintainer)
         | 
         | - https://github.com/simonw/llm (access LLMs from the cli -
         | cloud and local)
         | 
         | - https://github.com/oobabooga/text-generation-webui (a web ui
         | w/ different backends)
         | 
         | - https://github.com/ggerganov/llama.cpp (fast local LLM
         | runner)
         | 
         | - https://github.com/go-skynet/LocalAI (has an openai-
         | compatible api)
        
           | jacquesm wrote:
            | Ollama is very neat. Given how compressible the models
            | are, is there any work being done on using them in some
            | kind of compressed format, other than reducing the word
            | size?
        
             | nacs wrote:
              | Yes, AutoGPTQ supports this (8-, 4-, 3-, and 2-bit
              | quantization/"compression" of weights, plus inference).
              | 
              | GPTQ has also been merged into the Transformers library
              | recently ( https://huggingface.co/blog/gptq-integration ).
              | 
              | The GGML quantization format used by llama.cpp also
              | supports 8-, 6-, 5-, 4-, 3-, and 2-bit quantization.
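For anyone curious what "4-bit quantization" means mechanically, here is a toy sketch (assuming NumPy). This is not GPTQ's actual algorithm, which uses second-order information to choose quantized values; it's plain round-to-nearest symmetric per-tensor quantization for illustration:

```python
import numpy as np

def quantize(w, bits=4):
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 for 4-bit signed ints
    scale = np.abs(w).max() / qmax        # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, scale = quantize(w)
w_hat = dequantize(q, scale)
max_err = float(np.abs(w - w_hat).max())  # bounded by about scale / 2
```

Real low-bit formats (GPTQ, GGML's quant types) use per-group scales and smarter rounding to keep quality acceptable at 4 bits and below, but the storage saving comes from the same idea: keep small integers plus a scale instead of full floats.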
        
               | jacquesm wrote:
               | 'other than'...
        
           | brucethemoose2 wrote:
           | Also https://github.com/LostRuins/koboldcpp
           | 
            | The UI is relatively mature, as it predates llama. It
            | includes upstream llama.cpp PRs, integrated AI Horde
            | support, lots of sampling tuning knobs, easy GPU/CPU
            | offloading, and it's basically dependency-free.
        
           | ajhai wrote:
           | Adding to the list:
           | 
           | - https://github.com/trypromptly/LLMStack (build and run apps
           | locally with LocalAI support - I'm a maintainer)
        
         | CodeCompost wrote:
         | It seems to have been transferred?
         | 
         | https://github.com/matijagrcic/azurechatgpt
        
           | judge2020 wrote:
           | If it was transferred, the /microsoft link would have
           | redirected to it. Instead, it's the git commits re-uploaded
           | to another repo - so the commits are the same but it didn't
           | transfer past issues, discussions or PRs
           | https://github.com/matijagrcic/azurechatgpt/pulls?q=
        
             | jmorgan wrote:
             | I believe it would have also kept its stars, issues and
             | other data.
        
         | sdesol wrote:
         | All activity stopped a couple of weeks ago. It was extremely
         | active and had close to 5 thousand stars/watch events before it
         | was removed/made private. Unfortunately I never got around to
         | indexing the code. You can find the insights at
         | https://devboard.gitsense.com/microsoft/azurechatgpt
         | 
         | Full Disclosure: This is my tool
        
         | thund wrote:
         | maybe this? https://github.com/microsoft/chat-copilot
        
         | paxys wrote:
         | Based on past discussion, my guess is it was removed because
         | the name and description were wildly misleading. People starred
         | it because it was a repo published by Microsoft called
         | "azurechatgpt", but all it contained was a sample frontend UI
         | for a chat bot which could talk to the OpenAI API.
        
       | ankit219 wrote:
        | Curious what the latency would be using the OpenAI service vs.
        | a hosted LLM like Llama2 on premises. GPT-4 is slow, and given
        | the retrieval step (coming soon) across a company's entire
        | corpus of data, it could be even slower, since retrieval
        | happens sequentially. (Asking more because I'm curious at this
        | point.)
        | 
        | Another question is whether the latency even matters. Today,
        | the same employees ping their colleagues for answers and wait
        | hours to get a reply. GPT would be faster (and likely more
        | accurate) in most of those cases.
        
       | huijzer wrote:
        | I used to be super hyped about ChatGPT and the productivity it
        | could deliver. However, the large number of persistent bugs in
        | its interface has convinced me otherwise.
        
         | simonw wrote:
         | Bugs in the interface?
        
           | esafak wrote:
           | In the response, no doubt.
        
       ___________________________________________________________________
       (page generated 2023-08-28 23:00 UTC)