throwup238 2 days ago

I’m confused by this news story and the response here. No one seems to understand OpenAI’s corporate structure or non-profits at all.

My understanding: OpenAI follows the same model Mozilla does. The nonprofit has owned a for-profit corporation called OpenAI Global, LLC that pays taxes on any revenue that isn’t directly in service of their mission (in a very narrow sense based on judicial precedent) since 2019 [1]. In Mozilla’s case that’s the revenue they make from making Google the default search engine and in OpenAI’s case that’s all their ChatGPT and API revenue. The vast majority (all?) of engineers work for the for-profit and always have. The vast majority (all?) of revenue goes through the for-profit, which pays taxes on that revenue minus the usual business deductions. The only money that goes to the nonprofit tax-free is donations. Everything else is taxed at least once at the for-profit corporation. Almost every nonprofit that raises revenue outside of donations has to be structured more or less this way to pay taxes. They don’t get to just take any taxable revenue stream and declare it tax free.

All OpenAI is doing here is decoupling ownership of the for-profit entity from the nonprofit. They’re allowing the for profit to create more shares and distribute them to entities other than the non-profit. Or am I completely misinformed?

[1] https://en.wikipedia.org/wiki/OpenAI#2019:_Transition_from_n...

  • throwaway314155 2 days ago

    It's about the narrative they tried to create. The spin. It doesn't matter much if they were technically behaving as a for-profit entity previously. What matters is that they wanted the public (and likely, their talent) to _think_ that they weren't even interested in making a profit as this would be a philosophical threat to the notion of any sort of impartial or even hopefully benevolent originator of AGI (a goal which is laid plainly in their mission statement).

    As you've realized, this should have been (and was) obvious for a long time. But that doesn't make it any less hypocritical or headline worthy.

    • cdchn 2 days ago

      >What matters is that they wanted the public (and likely, their talent) to _think_ that they weren't even interested in making a profit as this would be a philosophical threat to the notion of any sort of impartial or even hopefully benevolent originator of AGI (a goal which is laid plainly in their mission statement)

      And now they want to cast off any pretense of that former altruistic yolk now that they have a new, better raison d'etre to attract talent: making absolutely unparalleled stacks of cash.

      • nejkbnek 2 days ago

        Just FYI it's "yoke" when it's a burden, "yolk" when it's an egg

        • salad-tycoon a day ago

          Maybe, they do seem to have egg on their face.

      • kombookcha a day ago

        It sort of remains to be seen whether they can actually make that cash - they're no longer the only game in town, and while they have an obvious adoption and brand name recognition advantage in their industry, they've also been running real hot on investor funding on the assumption that more processing power is what it's gonna take for them to stay competitive and continue improving. But they're gonna have to fight Google and Microsoft on that front. If there is a general plateau coming up in all of these models, or they fail to keep pace with their extremely well-supplied competition, it might not be so easy to convert their position into money.

      • hackernewds 2 days ago

        Note Elon made a significant "donation" early into OpenAI given their non-profit designation and intentions, in return receiving zero equity. The donation was also received tax-free.

        • stonogo a day ago

          He received a seat on the board, which he surrendered when Tesla started investing in AI.

    • pj_mukh 2 days ago

      Occam's razor: I think Sam's personal narrative is the correct one. He built a non-profit that took off in a way he didn't expect, and now a for-profit is the best way to harness the lightning they've caught.

      In terms of profit, AFAICT, Sam doesn't have designs on building extra-large yachts and his own space agency; what he wants is to be the one at the helm of building what he considers world-changing tech. One could rationally call this power-hungry, but one could also rationally call it helicopter parenting of a tech you've helped build. And for that, a for-profit that is allowed to maximize profits to re-invest in the tech is the optimal setup (esp. if all the competitors are doing the same)

      Is this a different org than when it started? Yes. Was this a dupe from the beginning? I don't think so.

      "But why can't he have a more worldly-aligned board looking over his shoulder?"

      Because we live in California and have a distaste for governance by committee or, worse, governance by constant non-representative democracy (see: housing).

      If the wheels now completely come off, I still think Congressional action can be a stopgap, but at least for now, this restructure makes sense to me.

      • feoren 2 days ago

        Sam: "I'm not in it for the money. I have principles."

        World: "But what if it was like, a lot of money?"

        Sam: "Oh alright you convinced me. Fuck my principles."

        • pj_mukh 2 days ago

          What do you do with a lot of money past a point? A corporate-controlled AGI being just a stop on the way to building another private space agency seems like a... letdown.

          • BrianHenryIE a day ago

            Let me recommend my favourite TikTok/YouTube channel of late, The Forest Jar

            what annoys you?: https://www.youtube.com/watch?v=k9Le1ibX2zY

            > if you went to a group of investors and pitched a board game where the winners get space ships and the losers die, they'd call you crazy. But if you suggested to those same investors that perhaps we shouldn't organize our entire society that way, they'd call you crazy.

            • pj_mukh a day ago

              "where the winners get space ships and the losers die,"

              The Social Security budget is $1.4 trillion, the federal welfare budget alone is >$1 trillion (not including state budgets), and then there's Medicare. Meanwhile, the NASA budget is <$25B (with SpaceX's operating budget and profits being a fraction of that).

              I wish we lived in that simple of a world. But we don't.

              • permo-w a day ago

                this is a complete non-sequitur. the US social security budget does not go to one person or a small group of oligarchs

          • latexr a day ago

            > What do you do with a lot of money past a point?

            Feed the hungry. House the homeless. Give away money unconditionally to those in need. Build hospitals in poor countries. Fight disinformation on crucial topics (such as climate change). Provide disaster relief. Not build more power hungry technology that exacerbates our current problems.

            Do literally anything positive for another person, that does not harm others.

            The list is pretty big when one isn’t selfish; there’s no law forcing anyone to build space agencies.

            A lack of imagination is not an excuse.

            • pj_mukh a day ago

              "Feed the hungry. House the homeless"

              Funnel $10B in housing to Los Angeles and you'll build less than 100 units of housing, because the inflationary push of that money would balloon the per-unit cost of housing. I don't want to imagine the effect of that on middle-class housing.

              Funnel $10B of food to xyz famine region and you've undercut local farmers for generations. Happens all the time [1]. And that's assuming you can get the aid past local corruption.

              These problems aren't as simple as people assume, and I'm low-key happy young naive billionaires are avoiding these issues instead of trying to throw their weight around.

              FWIW: Sam's already funneled a bunch of money into green energy production[2].

              [1]: https://haitisolidarity.net/in-the-news/how-the-united-state...

              [2]: https://www.cnbc.com/2024/09/25/sam-altman-backed-nuclear-st...

              • exceptione a day ago

                > Funnel $10B in housing to Los Angeles and you'll build less than 100 units of housing, because the inflationary push of that money would balloon the cost of per unit housing. I don't want to imagine the effect of that on middle class housing.

                Doesn't make sense to me. An uptick in construction work will not be an inflation balloon. More disposable income doesn't mean 1:1 more spending.

                If you build a lot of (social) housing, at worst you put a roof over a lot of people's heads.

                Families having less financial stress might lower the crime rate and improve children's school scores. They might save to start businesses or find their other talents.

                For some, this might be a downside though. It makes workers more educated, healthier, more stable, less desperate and less dependent on bosses, plus they might be less angry so politically less exploitable too.

                • pj_mukh a day ago

                  "An uptick in construction work will not be an inflation balloon. "

                  There's a massive shortage in construction workers [1], so yes there will be? The few construction workers we do have can demand higher wages (yay!) but then will they be outbidding other mid-income folks for housing with those increased wages? Sounds like an inflation spiral to me.

                  My statement wasn't against social housing, I love social housing. We just haven't cracked the code on scaling housing (and subsequent maintenance) yet. And the problem is about 80% political will; billionaire cash is useless here.

                  [1]: https://www.abc.org/News-Media/News-Releases/abc-2024-constr...

                  • exceptione a day ago

                    On a macro scale, that has hardly any impact, and I think it might even be immeasurable.

                    It is rather the other way around. Higher rents / house prices will make sure only people with higher wages can afford to live there. That means your bagel or coffee will be more expensive there too.

                    • pj_mukh a day ago

                      I didn’t say macro scale, I said Los Angeles; that’s the problem.

                      Pretty much everything required to build housing (wood, labor, pre-approved land) is in a massive shortage that we can’t spend money to fix.

                      So more money to simply pump demand for all those things will have a massive inflationary impact.

                      • exceptione 21 hours ago

                        Nope, LA is too much a part of the macro economy to make such an impact. Wood and labor don't have to come from LA, and even if those costs doubled (they won't) there would be around zero impact on inflation in LA. The land to be built on was going to be sold anyway; you just get one bidder more, or several bidders fewer if the council makes requirements like x% social housing.

                        Please, forget anything you are worrying about here, it does not apply.

                        • pj_mukh 15 hours ago

                          Literally every problem I mentioned is at its worst state possible. People with millions and billions are simply waiting to buy materials or get land approvals. It’s a well-known intractable problem [1] and really the crux of the issue.

                          If just these problems could be solved, the state has more than enough funds to house everyone. What billionaires do would be wholly irrelevant (as it is now).

                          [1] https://www.constructiondive.com/news/construction-materials...

              • latexr a day ago

                I didn’t say “do these things inefficiently”. If we know better, we can do better. It’s like if I said “use the money to fix the potholes in this road” and you replied “but if you shove all that asphalt in the same hole, it will create a mound that stops cars from going through”. Yeah, don’t dump everything in the same place without thinking.

                Start by collaborating with organisations which are entrenched in studying these issues and the impact of the solutions. If you have the money you can pay them to help and guide the effort; don’t act as if you know everything.

                • pj_mukh a day ago

                  Yes, this has basically been the modus operandi of the Gates Foundation and it took them 10 years to make a dent in malaria. They still have no clue how to “efficiently” reduce famines.

                  They won’t touch American housing problems with a 10ft pole. That should tell you something.

                  Go to Berkeley, tell them a Billionaire wants to build housing for the homeless in their neighborhood. See what happens.

                  It’s a hard pill to swallow, but the best thing billionaires can do is let us tax them and then butt out and go fly rockets. The political problems are up to the rest of us.

                  • exceptione 21 hours ago

                    The housing problem in the USA is mostly a NIMBY problem. It is difficult to get projects off the ground.

                  • latexr 19 hours ago

                    > They won’t touch American housing problems with a 10ft pole.

                    Why do you keep insisting on the USA? It’s not the only country in the world.

                    > Go to Berkeley

                    I will not. I’m not American.

                    > It’s a hard pill to swallow but the best thing billionaires can do is let us tax them

                    Maybe it’s a hard pill to swallow for the billionaire, but I personally agree and think you’re right. However, this conversation started with someone asking “what do you do with a lot of money past a point” and offering only a private space agency as an alternative to working on AGI. My point was there are many other problems worth pursuing.

                    • pj_mukh 15 hours ago

                      My point was every other problem would be made worse by a billionaire pushing his/her money in there. Everyone is a couple of billion funneled dollars away from becoming the next George Soros.

                      If you don’t think NIMBYism and degrowth are a problem in your country yet, just give it a couple of years. It just hit England, you’re next. No billionaire can save you.

          • talldayo 2 days ago

            To be honest, I would take a private space agency 7 days out of the week with that kind of capital. We have no fundamental proof that LLMs will scale to the intelligence levels that we imagine in our heads. The industry application for LLMs is even weaker than computer vision, and the public sentiment is almost completely against it. Sam's product is hype; eventually people are going to realize that Q* and Strawberry were marketing moves intended to extend OpenAI's news cycle relevancy and not serious steps towards superintelligence. We were promised tools, and they're shipping toys.

            I could tell you in very plain terms how a competitor to Boeing and SpaceX would benefit the American economy. I have not even the faintest fucking clue what "AGI" even is, or how it's profitable if it resembles the LLMs that OpenAI is selling today.

            • pj_mukh 2 days ago

              I would agree with you that a space agency is also useful (maybe more useful some days of the week). Sam disagrees and thinks he can do better without a non-profit board now. I'm glad we live in a world where he gets to try and we get to tax him and his employees to do other things we consider useful.

            • hackernewds 2 days ago

              This comment reeks of Steve Ballmer's opinion of Apple and the early Internet. If you work at any decent technology company, you see AI applications everywhere, along with the pending mass layoffs, or nimbler startups replicating the company's work more efficiently.

              • k__ 2 days ago

                Fair.

                On the other hand, just because the execs who do the layoffs bought into the narrative doesn't mean they're right.

              • talldayo a day ago

                This comment reeks of Tim Cook's opinion of OpenAI in the late days of Apple's inability to create anything innovative in-house.

            • cdchn 2 days ago

              Private space agency and LLMs both seem like big industries going nowhere driven by sci-fi hopes and dreams.

              • blendergeek 2 days ago

                It's interesting how first impressions can be so deceiving. The world's largest private space agency (SpaceX) has completely changed the game in rural internet connectivity. Once upon a time, large chunks of the US had no reliable high-speed internet. SpaceX has brought high-speed, low-latency internet to every corner of the globe, even the middle of the ocean and Antarctica. This isn't going nowhere, even if it seems that way.

              • sfblah 2 days ago

                Not sure I agree with you here. I use LLMs all the time for work. I've never once used a space agency for anything.

                • macintux 2 days ago

                  GPS, weather forecasting, tv broadcasting…I’ve been using a space agency for as long as I’ve been alive.

                • blendergeek 2 days ago

                  My Dad uses SpaceX to work from home every day.

                  • starspangled 2 days ago

                    SpaceX is not a private space agency though, it is a private space launch and satellite communications company, which has revolutionized access to space and access to communication, providing enormous social benefit.

                    People use SpaceX every day even if they never connected to a starlink -- the lower costs that governments pay for space launches means more money for other things, not to mention no longer paying Russia for launches or engines.

                • cdchn 2 days ago

                  I think they're both overhyped by sci-fi optimism but I would agree (even being mostly an AI minimalist) that the impact of LLMs (and their improvement velocity) is a lot more meaningful to me right now. I mean, satellites are cool and all.

          • KSteffensen a day ago

            Cure cancer? Solve this climate change thing?

          • vasco 2 days ago

            Kid Rock did it first, but a golden toilet would be my answer.

        • ninepoints 2 days ago

          Anyone who had any respect for Sam "Give me your eyeball data" Altman was always delusional.

        • grahamj 2 days ago

          Sam: That was child's play for me

        • downrightmike 2 days ago

          And that is why SkyNet decided immediately to destroy everyone.

      • huevosabio 2 days ago

        I don't think the narrative makes sense. It was clear from way back in 2016 that training would take a ton of resources. Researchers were already being sucked into FAANG labs because they had the data, the compute, and the money. There was never a viable way for a true non-profit to make world-changing, deep learning-based AI models.

        When seen through the rearview mirror, the whole narrative screams of self-importance and duplicity. GPT-2 was too dangerous, and only they were trustworthy enough to possess it. They were trustworthy because this was a non-profit, so "interests aligned with humanity". This charade continued until barely a few months ago.

      • grey-area 2 days ago

        He didn’t build it.

        • eberfreitas a day ago

          Can we please move this comment to the top?

      • namaria a day ago

        Occam's razor has never meant "let's take discourse at face value".

        That's not least complexity. That's least effort.

  • halJordan 2 days ago

    It isn't a tax thing or a money thing, it's a control and governance thing.

    The board of the non-profit fired Altman and then Altman (& MS) rebelled, retook control, & gutted the non-profit board. Then, they stacked the new non-profit board with Altman/MS loyalists and now they're discarding the non-profit.

    It's entirely about control. The board has a legally enforceable duty to its charter. That charter is the problem Altman is solving.

    • knome 2 days ago

      >That charter is the problem Altman is solving

      We worry about the existential threat of AI becoming a paperclip factory while many humans already are.

    • teleforce 2 days ago

      I wish I had 10 upvotes to give you. Bravo, excellent observations and conclusions.

  • burnte 2 days ago

    The problem is that OpenAI calls itself OpenAI when it's completely sealed off, and calls itself a non-profit when, as you say, almost everything about it is for-profit. Basically they're whitewashing their image as an organization with noble goals when it's simply yet another profit-motivated company. It's fine if that's what they are and want to be, but the lies are bothersome.

  • joe_the_user 2 days ago

    There's a now-quintessential HN post format: "Posters criticizing X don't seem to understand [spray of random details about X that don't refute the criticism - just cast the posts as ignorant]".

    In this case, Mozilla as a non-profit owning a for-profit manages to more or less fulfill the non-profit's mission (maintaining an open, alternative browser). OpenAI has been in a hurry to abandon its non-profit mission for a while, and the complex details of its structure don't change this.

  • seizethecheese 2 days ago

    “Decoupling” is such a strange euphemism for removing an asset worth north of $100b from a nonprofit.

    • throwup238 2 days ago

      OpenAI Global LLC is the $100b asset. It’s not being removed, the nonprofit will still own all the shares it owns now until it decides to sell.

      • sangnoir 2 days ago

        The shares will be diluted - the LLC used to be 100% owned by the non-profit, and now there's no bottom.

        • Aeolun 2 days ago

          Normally shareholders aren’t ok with that.

          • b800h 2 days ago

            I was under the impression that in UK law at least (and obviously that doesn't apply in this case), the trustees of a non-profit would be bound to work in the best interests of that non-profit. And so allowing an asset like this to somehow slip out of their control would be the sort of negligence that would land you in very hot water. I'd be interested to know how this isn't the case here.

            • upwardbound 2 days ago

              I think it is the case here, and I hope Elon Musk persists in his lawsuits about this. As a large donor to the nonprofit in its early days he’s one of the people with the strongest standing to sue / strongest claim for damages.

              Obviously Elon is mostly doing this suit as a way to benefit Grok AI but honestly I don’t mind that; competitors are supposed to keep each other in check, and this is a good and proper way for companies to provide checks & balances to each others’ power. One reason monopolies are bad is precisely the absence of competitor-enforced accountability.

              Lawsuit: https://www.reuters.com/technology/elon-musk-revives-lawsuit-against-sam-altman-openai-nyt-reports-2024-08-05/

            • stale2002 2 days ago

              > somehow slip out of their control would be the sort of negligence that would land you in very hot water.

              > how this isn't the case here.

              It's not the case because they are doing the opposite of what you are suggesting. They are increasing the value of the asset that they own.

              Sure, the asset itself is being diluted, but the individual parts that it owns are more valuable.

              It is perfectly reasonable for a non-profit to prefer to own, let's say, 30% of a $100 billion asset ($30 billion) compared to 100% of a $10 billion asset.

              • mlsu a day ago

                Isn't the goal of a non-profit by its very definition... not profit?

                If the goal of the OpenAI non-profit is something something control the development of AI for the good of all humanity, then it seems that they explicitly shouldn't care about making $20 billion, and explicitly should care about maintaining control of OpenAI.

                If you listen to their rhetoric, $20 billion is peanuts compared to the lightcone and the kardashev scale and whatever else.

  • nfw2 2 days ago

    > "All OpenAI is doing here is decoupling ownership of the for-profit entity from the nonprofit."

    Yes, but going from being controlled by a nonprofit to being controlled by a typical board of shareholders seems like a pretty big change to me.

  • mr_toad 2 days ago

    > All OpenAI is doing here is decoupling ownership of the for-profit

    All? As far as I know this is unprecedented.

    • A1kmm 2 days ago

      Maybe at this scale.

      But unfortunately charities and not-for-profits putting their core business into a company, and then eventually selling it off, is not unprecedented. For example, The Raspberry Pi Foundation was a not-for-profit organisation around Raspberry Pi. They formed an LLC for their commercial operations, then gradually sold it off before eventually announcing an IPO: https://www.raspberrypi.org/blog/what-would-an-ipo-mean-for-....

      I think it is terrible that not-for-profits are just being used as incubators for companies that eventually take the core mission and stop primarily serving the public interest.

      There are of course other examples of charities or not-for-profits that put part of their core operations in a company and don't sell out, instead retaining 100% ownership - for example Mozilla. However, I think there should be some better way for impactful not-for-profits to have some revenue generating aspects in line with their mission (offset by allowing temporary surplus to cover future expenses, or by other expenses).

      • throwup238 2 days ago

        > Maybe at this scale.

        I don't think it's unprecedented, even at this scale.

        Novo Nordisk, the pharmaceutical company behind Semaglutide (aka Ozempic) with a market cap >$600 billion, was founded by the Novo Nordisk Foundation before going public. The latter now has an endowment of over $150 billion and owns a significant fraction of the public company.

  • bbor 2 days ago

    Good questions!

    Right now, OpenAI, Inc. (California non-profit, let's say the charity) is the sole controlling shareholder of OpenAI Global LLC (Delaware for-profit, let's say the company). So, just to start off with the big picture: the whole enterprise was ultimately under the sole control of the non-profit board, which in turn was obligated to operate in furtherance of "charitable public benefit". This is what the linked article means by "significant governance changes happening behind the scenes," which should hopefully convince you that I'm not making this part up.

    To get really specific, this change would mean that they'd no longer be obligated to comply with these CA laws:

    https://leginfo.legislature.ca.gov/faces/codes_displayText.x...

    https://oag.ca.gov/system/files/media/registration-reporting...

    And, a little less importantly, comply with the guidelines for "Public Charities" covered by federal code 501(c)(3) (https://www.law.cornell.edu/uscode/text/26/501) covered by this set of articles: https://www.irs.gov/charities-non-profits/charitable-organiz... . The important bits are:

      The term charitable is used in its generally accepted legal sense and includes relief of the poor, the distressed, or the underprivileged; advancement of religion; advancement of education or science; erecting or maintaining public buildings, monuments, or works; lessening the burdens of government; lessening neighborhood tensions; eliminating prejudice and discrimination; defending human and civil rights secured by law; and combating community deterioration and juvenile delinquency.
      ... The organization must not be organized or operated for the benefit of private interests, and no part of a section 501(c)(3) organization's net earnings may inure to the benefit of any private shareholder or individual.
    
    I'm personally dubious about the specific claims you made about revenue, but that's hard to find info on, and not the core issue. The core issue was that they were obligated (not just, like, promising) to direct all of their actions towards the public good, and they're abandoning that to instead profit a few shareholders, taking the fruit of their financial and social status with them. They've been making some money for some investors (or losses...), but the non-profit was, legally speaking, only allowed to permit that as a means to an end.

    Naturally, this makes it very hard to explain how the nonprofit could give up basically all of its control without breaking its obligations.

    All the above covers "why does it feel unfair for a non-profit entity to gift its assets to a for-profit", but I'll briefly cover the more specific issue of "why does it feel unfair for OpenAI in particular to abandon their founding mission". The answer is simple: they explicitly warned us that for-profit pursuit of AGI is dangerous, potentially leading to catastrophic tragedies involving unrelated members of the global public. We're talking "mass casualty event"-level stuff here, and it's really troubling to see the exact same organization change their mind now that they're in a dominant position. Here are the relevant quotes from their founding documents:

      OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact... 
      It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly. Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.
    
    From their 2015 founding post: https://openai.com/index/introducing-openai/

      We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power. Our primary fiduciary duty is to humanity...
      We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”
    
    From their 2018 charter: https://web.archive.org/web/20230714043611/https://openai.co...

    Sorry for the long reply, and I appreciate the polite + well-researched question! As you can probably guess, this move makes me a little offended and very anxious. For more, look at the posts from the leaders who quit in protest yesterday, namely their CTO.

    • throwup238 2 days ago

      > I'm personally dubious about the specific claims you made about revenue, but that's hard to find info on, and not the core issue. The core issue was that they were obligated (not just, like, promising) to direct all of their actions towards the public good, and they're abandoning that to instead profit a few shareholders, taking the fruit of their financial and social status with them. They've been making some money for some investors (or losses...), but the non-profit was, legally speaking, only allowed to permit that as a means to an end.

      Look at your OpenAI invoices. They're paid to OpenAI LLC, not OpenAI Inc. I can't find confirmation on openai.com of what the exact relationship between OpenAI Global LLC and OpenAI LLC is, but the former is on their "Our Structure" page and the latter is in their data processing addendum, so it's probably the subsidiary in charge of operating the services while Global does training and licenses it downstream. OpenAI Global was the one that made that big $10 billion deal with Microsoft.

      That obligation is why they had to spin off a for-profit corporation. Courts are very strict in their interpretation of what "unrelated business income" is and the for-profit LLC protects the non-profit's tax exempt status.

      > "why does it feel unfair for a non-profit entity to gift its assets to a for-profit"

      What assets were gifted, exactly? They created the for-profit shortly after GPT2 (in 2019) and as far as I can tell that's the organization that has developed the IP that's actually making money now.

      I honestly don't understand how this isn't in the interest of the nonprofit's mission. It's currently a useless appendage and will never have any real power or resources until either OpenAI is in the black and sending profit up to it, or they can sell OpenAI shares. I don't think the charity has any more claim over GPT4 than Google does, having invented transformers.

      If this next round of funding goes through at $100-150 billion valuation, OpenAI Inc will probably be (on paper at least) the second wealthiest charity on the planet after the Novo Nordisk Foundation. This restructuring opens the way for the nonprofit to sell its shares and it's going to be a hell of a lot of money to dedicate towards their mission - instead of watching its subsidiary burn billions of dollars with no end in sight.

      • bbor 2 days ago

        Thanks for another polite response! I think this is the fundamental misunderstanding:

           What assets were gifted, exactly? ...It's currently a useless appendage and will never have any real power or resources until either OpenAI is in the black and sending profit up to it, or they can sell OpenAI shares.
        
        The charity will gift its control of a $100B company for some undisclosed "minority stake" (via a complex share dilution scheme), in exchange for nothing other than "the people we're gifting it to have promised to do good with it". It's really that simple. The charity never was intended to draw profit from the for-profit, and the "never have any real power" contention is completely inaccurate -- they have direct, sole control over the whole enterprise.

          This restructuring opens the way for the nonprofit to sell its shares and it's going to be a hell of a lot of money to dedicate towards their mission
        
        Even putting aside the core issue above (they won't have many shares to sell), the second part of my comment comes back here: what would they buy with all that money? Anthropic? Their explicit mission is to beat for-profit firms in the race to AGI so convincingly that an arms race is avoided. How could they possibly accomplish this after gifting/selling away control of the most capable AI system on the planet?

        Finally, one tiny side point:

          Courts are very strict in their interpretation of what "unrelated business income" is and the for-profit LLC protects the non-profit's tax exempt status.
        
        I'm guessing you're drawing on much more direct experience than I am and I don't question that, but this seems like a deceptive framing. Normal charities have no issues with unrelated business income, because they don't run "trades or businesses", they just spend their money. I know that selling ChatGPT subscriptions is a large income source, but it's far from the only way to pursue their mission -- and is wildly insufficient, anyway. They in no way were forced to do it to meet their obligations.

        Again, I'm a noob, so I'll cite the IRS-for-dummies page on the topic for onlookers: https://www.irs.gov/charities-non-profits/unrelated-business...

        • throwup238 2 days ago

          > The charity will gift its control of a $100B company for some undisclosed "minority stake" (via a complex share dilution scheme), in exchange for nothing other than "the people we're gifting it to have promised to do good with it". It's really that simple. The charity never was intended to draw profit from the for-profit.

          That's not my interpretation. It sounds like the restructuring is part of the deal that values OpenAI at $100-150 billion [1]. They're not "gifting" away anything any more than a Series A investor gifts something to a Series B investor. They're restructuring so that their stake will be worth more afterwards than it is now, regardless of the percentages. That's what every company owner goes through when the company raises a VC round, goes public, or even just offers employees stock options. That doesn't change because the owner is a nonprofit, and it sounds like until they restructure, the for-profit is worth nowhere near that crazy 12-figure number.

          "Minority stake" just means that they won't have enough to control the corp out right with 50%+1 which is probably what everyone wants to justify the investment. Reuters TFA says "The plan is still being hashed out with lawyers and shareholders and the timeline for completing the restructuring remains uncertain" so we don't really know what the post valuation numbers look like or who is getting what. We also don't know how the voting vs non-voting shares will split. Losing majority control after multiple multi-billion dollar rounds is the norm so if Microsoft's previous $10 bil investment converts and 10-20 pts go to the employee pool, it's a perfectly fair deal.

          > Even putting aside the core issue above (they won't have many shares to sell), the second part of my comment comes back here: what would they buy with all that money? Anthropic? Their explicit mission is to beat for-profit firms in the race to AGI so convincingly that an arms race is avoided. How could they possibly accomplish this after selling control of the most capable AI system on the planet?

          As far as I can tell, without this deal OpenAI LLC goes bankrupt under the rumored $5b/yr losses and the charity loses all relevance when the bankruptcy court sells off the IP in a fire sale. With this deal, it can create a secondary market and use the funds to focus on its actual mission. The only way that this deal doesn't make sense (in my mind) is if you believe that GPT4/o1/whatever are the keys to fulfilling OpenAI's mission and it becomes impossible if it loses control. Personally I find that very hard to believe, which might be the actual disconnect we're having here.

          What would they buy? They'd hire people to do research under their umbrella instead of a for-profit one, fund compute infrastructure for researchers who don't have billions for H100s, give grants to organizations, or spin off more startups. Even if it's a 20% stake of a $100 billion valuation, that's enough for an Ivy League-sized endowment that can fund AI research for generations. If it's a 49% stake of $150 billion, that makes it the second wealthiest charity after the Novo Nordisk Foundation - which continues to do tons of biomedical research even though it doesn't have total control over the public Novo Nordisk or the semaglutide IP.

          OpenAI would become one of the largest grant giving organizations in the world overnight using just the interest from the endowment. Imagine the equivalent to 50-100% of the NSF's annual grant budget going just to AI research!
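
          Back-of-the-envelope on that, for onlookers. The stake sizes are the hypotheticals above; the 5% annual payout is my own assumption (a conventional endowment draw rate), not anything OpenAI has announced:

            for stake, valuation in [(0.20, 100e9), (0.49, 150e9)]:
                endowment = stake * valuation
                annual_draw = endowment * 0.05  # assumed 5% payout rate
                print(f"{stake:.0%} of ${valuation / 1e9:.0f}B -> "
                      f"${endowment / 1e9:.1f}B endowment, ~${annual_draw / 1e9:.1f}B/yr")
            # 20% of $100B -> $20.0B endowment, ~$1.0B/yr
            # 49% of $150B -> $73.5B endowment, ~$3.7B/yr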

          > Normal charities have no issues with unrelated business income, because they don't run "trades or businesses", they just spend their money.

          See my other reply [2] for a sample of some nonprofits that have for-profit arms. A significant fraction of them do, especially those that offer some sort of product or service (I don't have hard stats but I'd venture it's the majority of the major charities that do the latter). "Normal" charities that just disburse grant money and spend donations are only one type among many.

          [1] https://www.reuters.com/technology/artificial-intelligence/o...

          [2] https://news.ycombinator.com/item?id=41661063

          • bbor 2 days ago

            Hmm, fair points, especially on there not being a strict norm for charities. Thanks for taking more time to clarify.

            I’d say we disagree about the following somewhat indeterminate points:

            1. Whether OpenAI has/could-have-had staying power without raising immense amounts of venture capital. I will readily admit that they’ve gone so far down this road that they are now somewhat trapped by their massive investments and contracts, not to mention losing almost all their top researchers.

            2. Whether the people in charge of this deal can be trusted to propose a fair outcome for the charity other than “their mission ineffably lives on in us” (and as a corollary, whether a fair outcome is likely).

            3. Whether the private technical assets of OpenAI (GPT, DALLE, and Sora) are meaningfully unique in their potential for impact, knowing what the public knows in the current moment — which I will admit is far from the complete competitive picture.

            4. Whether OpenAI’s mission could be meaningfully achieved by passing out grants to a diverse body of scientists.

            I’d be happy to “debate” (lol) any of those particulars if you want, but I think it’s otherwise best to leave it at “we assess the known facts differently”. If we let some time pass, that’ll at least settle question 2…

            • throwup238 2 days ago

              > 2. Whether the people in charge of this deal can be trusted to propose a fair outcome for the charity other than “their mission ineffably lives on in us” (and as a corollary, whether a fair outcome is likely).

              Since Sam Altman is on the board of OpenAI Inc, I expect this deal will be under extreme scrutiny for self dealing. He has flown under the radar so far by not taking any equity but this changes the second he does (IANAL). California, Delaware, and the feds will be looking closely at the deal.

              I don't think the danger for OpenAI the charity is as great as people make it out to be. They'll be able to do a lot more than hand out grants with an 11 figure endowment.

  • simantel 2 days ago

    > Almost every nonprofit that raises revenue outside of donations has to be structured more or less this way to pay taxes.

    I don't think that's true? A non-profit can sell products or services, it just can't pay out dividends.

    • throwup238 2 days ago

      If those products and services are unrelated business income, they have to pay taxes on it: https://www.irs.gov/charities-non-profits/unrelated-business...

      What counts as “related” to the charity’s mission is fuzzy but in practice the courts have been rather strict. They don’t have to form for-profit subsidiaries to pay those taxes but it helps to derisk the parent because potential penalties include loss of nonprofit status.

      For example, the nonprofit Metropolitan Museum of Art has a for-profit subsidiary that operates the gift shop. National Geographic Society has National Geographic Partners which actually owns the TV channel and publishes the magazine. Harvard and Stanford have the Harvard Management Company and Stanford Management Company to manage their respective endowments. The Smithsonian Institution has Smithsonian Enterprises. Mayo Clinic => Mayo Clinic Ventures. Even the state-owned University of California Regents have a bunch of for-profit subsidiaries.

  • hackernewds 2 days ago

    How is it possible to make tax free "donations" for profit making applications? You seem to imply there is nothing nefarious about the setup. Except the non-profit doesn't actually perform any social services; instead it stands as a business structure to skirt taxation. Change my mind

    • throwup238 2 days ago

      > How is it possible to make tax free "donations" for profit making applications?

      The nonprofit invests the tax-free donations into the for-profit. It gets to keep its equity just like any other investor and as long as that equity is held by the non-profit, there's no taxable event. If the nonprofit sells its shares - since it was closely involved in the creation and management of the for-profit as an active investment - it becomes a taxable event under the unrelated business income tax rules. Until that time, the only "profit making applications" like ChatGPT and the API are run by the for-profit, which - I repeat - pays its taxes.

      I genuinely don't understand why people think they've skirted taxation except out of sheer ignorance of how non-profits actually work. A 501(c)(3) is not some magic Monopoly "get-out-of-tax-free" card and the IRS isn't stupid, there's a ton of rules for tax exemption. They'd have a much easier time with tax avoidance if they were an actual for profit corporation with billions of dollars because GAAP rules are a lot more forgiving than non-profit regulations.

  • wubrr 2 days ago

    What leverage does Sam Altman have to get equity now? Does he personally have control over that decision?

kweingar 2 days ago

Can anybody explain how this actually works? What happens to all of the non-profit's assets? They can't just give it away for investors to own.

The non-profit could maybe sell its assets to investors, but then what would it do with the money?

I'm sure OpenAI has an explanation, but I really want to hear more details. In the most simple analysis of "non-profit becomes for-profit", there's really no way to square it other than non-profit assets (generated through donations) just being handed to somebody for private ownership.

  • lolinder 2 days ago

    If the assets were sold to the for-profit at a fair price I could see this being legal (even if it shouldn't be). At least in that case the value generated by the non-profit tax-free would stay locked up in non-profit land.

    The biggest problem with this is that there's basically no chance that the sale price of the non-profit assets is going to be $150 billion, which means that whatever the gap is between the valuation of the assets and the valuation of the company is pure profit derived from the gutting of the non-profit.

    If this is allowed, every startup founded from now on should rationally do the same thing. No taxes while growing, then convert to for profit right before you exit.

    • amluto 2 days ago

      It’s pretty great if you can manage to have the parent be a 501(c)(3). Have all the early investors “donate” 90% of their investment to the 501(c)(3) and invest 10% in the for-profit subsidiary the old-fashioned way. They get a tax deduction, and the parent owns 90% of the subsidiary. Later on, if the business is successful, the parent cashes out at the lowest possible valuation they can pull off with a mostly straight face, and all the investors in the subsidiary end up owning their shares, pro rata, with no dilution from the parent. The parent keeps a bit of cash (and can use it for some other purpose).

      Of course the investors do end up owning their shares at a lower basis than they would otherwise, and they end up a bit diluted compared to a straightforward investment, but the investors seem likely to more than make up for this by donating appreciated securities to the 501(c)(3) and by deferring or even completely avoiding the capital gains tax on their for-profit shares.
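
      A toy version of that arithmetic, with made-up numbers (the $10M commitment and the 37% marginal rate are my illustrative assumptions):

        # Hypothetical investor in the scheme sketched above -- illustrative only.
        commitment = 10_000_000
        donation = 0.9 * commitment  # "donated" to the 501(c)(3) parent, deductible
        direct = 0.1 * commitment    # invested in the for-profit subsidiary
        deduction_value = donation * 0.37             # assumed marginal tax rate
        effective_cost = commitment - deduction_value
        print(f"${effective_cost:,.0f}")              # $6,670,000 effective outlay
        # If the parent later cashes out its 90% at a lowball valuation, the
        # direct investors' shares remain pro rata, undiluted by the parent.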

      Obviously everyone needs to consult their lawyer about the probability of civil and/or criminal penalties.

    • mlinsey 2 days ago

      I haven't seen any details, but isn't this a pretty straightforward way of doing it? The non-profit has had majority ownership of the for-profit subsidiary since 2019. The already-for-profit subsidiary has owned all the ChatGPT IP, all the recent models, all the employee relationships, etc etc.

      The cleanest way for this to work is for the for-profit to just sell more shares at the $150B valuation, diluting the non-profit entity below majority ownership. The for-profit board, which the non-profit could still probably have multiple seats on, would control the real asset; the non-profit would still exist and hold many tens of billions in value. It could further sell its shares in the for-profit and use the proceeds in a way consistent with its mission.
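
      To make the dilution mechanics concrete (the 51% starting stake and the $15B raise are made-up numbers, not reported figures):

        post_money = 150e9       # assumed post-money valuation of the round
        new_money = 15e9         # assumed size of the new raise
        nonprofit_before = 0.51  # assumed just-above-majority starting stake
        new_investor_share = new_money / post_money            # 10% of the company
        nonprofit_after = nonprofit_before * (1 - new_investor_share)
        print(f"{nonprofit_after:.1%}")                        # 45.9% -- below majority
        print(f"${nonprofit_after * post_money / 1e9:.1f}B")   # ~$68.8B stake on paper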

      They wouldn't even have to sell that much - I am pretty sure the mega-fundraising rounds from Microsoft etc. brought the non-profit's ownership to just north of 50% anyway.

      I don't see how this wouldn't be above board, it's how I assumed it was going to work. It would indeed mean that the entity that controls ChatGPT would now be answerable to shareholders, a majority of which would be profit seeking and a minority of which would be the non-profit with its mission, but non-profits are allowed to invest in for-profits and then sell those shares; all the calls for prosecutions etc. seem just like an internet pitchfork mob to me.

      • jprete 2 days ago

        The non-profit would have to approve the scheme, and a rational non-profit would not, because it gives up any ability the non-profit has to fulfill its charter.

        • space_fountain 2 days ago

          Exactly, the question is: is this move in the non-profit's best interests? It's definitely in the best interest of the people running the non-profit, but I think many of the early donors wouldn't feel like this was what they were signing up for

      • space_fountain 2 days ago

        I think the problem is early employees and investors were convinced to invest their time and money into a non-profit. They were told that one of the reasons they should donate/work there as opposed to Google was because it was a non-profit focused on doing good. Now, when it seems like that non-profit is successful, all of that is being thrown out the window in service of a structure that will result in more profit for the people running it

    • bdowling 2 days ago

      For-profit startups don’t pay taxes while growing either, because they aren’t making any profit during that phase.

      • authorfly 2 days ago

        Corporate tax is always only paid on profit and is usually a minor part of the tax draw for the government from corporations of all sizes.

        The vast majority of taxes paid in developed nations are employee taxes, plus whatever national and local sales taxes and health/pension-equivalent taxes are (indirectly) levied (usually 60-80% of national tax revenue). Asset taxes are a bit different.

        It's true even in the bootstrapped company case. Say you earn $100k and keep $50k after all the direct and indirect employee taxes. Now imagine you spend $40k of that $50k in savings setting up a business: $30k on another employee, paying $15k of employer and employee taxes along the way, and the other $10k on a marketing company (which will spend $5k of that on employees and pay $2.5k of tax). If you then earn less than $40k in income, by the end of year 1 you have:

        1) A loss-making startup which nonetheless is further along than nothing

        2) Out of $100k of your original value, $67.5k has already reached the government within 12 months

        3) Your time doing the tech side was not compensated but could not (for obvious anti-fraud reasons) be counted as a loss. As you have noted, you don't pay tax when you make a loss, but you also don't get any kind of negative rebate (except under certain sales tax regimes or schemes).

        If you are in the US, the above is currently much worse due to the insane way R&D software spend must be amortized over several years instead of deducted immediately, turning it into an up-front tax burden.

        So it's really not fair to say a new startup isn't paying taxes. They almost always are. There are very few companies or startups that pay less than 50% of their income to staff, and almost all of those are the unicorns or exceptional monopoly/class leaders. Startups and founders tend to disproportionately give more of their income and are, to that extent, essentially re-taxed.

        Even though you saved the money in order to start a startup, and paid your due employee taxes, you then have to pay employee taxes to use it, etc.
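
        Tallying the example above in code (all the round-number "rates" are the illustrative assumptions from the example, not real tax rates):

          gross_income = 100_000
          your_taxes = 50_000       # direct + indirect taxes on your own pay
          employee_taxes = 15_000   # employer + employee taxes on the $30k hire
          marketing_taxes = 2_500   # taxes paid downstream by the marketing firm
          total_to_government = your_taxes + employee_taxes + marketing_taxes
          print(total_to_government)  # 67500 -- the $67.5k within 12 months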

        • mpeg 2 days ago

          Is this a US thing? In the UK employee tax is the employee’s to pay, not the company’s. Even if the company technically transfers it directly to the tax agency, it’s not really their money.

          EDIT: I guess we do have employer tax as national insurance contributions too, always forget about that since I’ve always paid myself under that threshold

          • authorfly 2 days ago

            I'm not sure if you mean whether the UK has the same pattern of low corporation tax versus high income/pension/NI taxes? If so, yes.

            The UK does have employer NI contributions but that's not what I mean. The point is: if you spend a year earning a gross £100k, paying £50k of total tax as you earn it, and you then spend £40k of the remaining £50k on an employee's salary at your company, with £20k of tax paid on it, the government has that year earned £70k from that £100k passing through.

            You can argue that really "£140k" has passed through, but it's not the case, because you created a new job that wouldn't otherwise have existed had you instead saved that £40k for a house. Either way HMRC gets £70k this year rather than £50k.

            The wider point I was making is that all companies, even for-profit ones, pay tax to do just about anything, and companies with much lower sales than costs aren't just paying nothing. They generally have higher costs because they are paying people, and paying their taxes every month. The tax per employee is completely uncorrelated with the financial profit (or lack thereof) of the business, so it's an understandable misconception that companies that don't make a profit, like startups, don't contribute to the economy. They do, by paying employment taxes.

            I'm really making the point that you have to account for employee taxes (both employer and employee as you mention) for your costs as a business. That means, even though you already paid those yourself when you carried out the work to gain savings to invest in your business (to spend on an employee), you have to pay again when paying your employee.

            I.e. self-funded businesses, launched from previously accrued personal income and where you invest your own time as well, result in a bad tax situation;

            whereas an employee earning £100k might pay £50k tax total and save £50k for a house (no VAT),

            The alternative of investing that £50k in your business by paying someone £40k means you have to pay that employee's PAYE and their employer and employee NI. So the government gets to re-tax most of that money when you use it to hire someone to build a new business with you, in a way it doesn't if you use it to buy a house, in terms of practical impact. How you pay yourself as an entrepreneur varies: there's dividends+PAYE in the UK (which requires, yes, paying both the employer and employee tax for yourself) or capital gains (ignoring tax schemes); either way, you do get taxed at some point to bring cash out.

            The government in other words massively benefits from unprofitable for-profit companies so long as they hire some people, especially if the companies are self-funded. But even if it is investment, it's better to have that money spent on salaries now in new companies than sitting as stock in larger companies that keep cash reserves or use schemes to avoid tax. They get much more tax from people starting even unprofitable new businesses, than from employees who simply save money.

            It's one of the reasons that since the introduction of income taxes (more or less WW1 in most countries!), you need money to get money, in a way that you fundamentally did not back when you could earn $50 from someone and directly use that same $50 to pay someone for the same skills without any loss of value.

            • vladms 2 days ago

              > So the government gets to re-tax most of that money when you use it to hire someone to build a new business with you.

              You should also consider it from the point of view of the employee. The government taxes your employee to offer him services; it does not care who hires him (you, who saved the money).

              Yes, it is true that you need lots of money to HIRE someone, but you can try to do a startup with a couple of people who live off their savings for a while (so, not paying themselves a salary, but holding shares), which avoids the tax situation at first.

              I think we are quite bad at assessing what life was like around 1900 in terms of infrastructure (in any country) - so yes, people probably paid less tax but lived in much worse overall conditions.

              • authorfly 2 days ago

                True, you can try to do a startup without hiring anyone, but how many companies with no paid employees succeed or bring in a net profit? You can only do that until you need to hire someone; then you hit the same wall.

                Forget who the government is supposedly taxing, and for what purpose. Asset law and income law work differently: for assets, liabilities count against gains and all sources can intermingle over the financial year, while income tax is always payable within a month. That difference is why these taxes work so differently in practice, not simply that one is on income and one is on personal assets (accrual). We could instead "tax an employee to offer his services" in a way that let them deduct savings spent on businesses from tax due on other sources, or we could charge higher rates on capital gains than on personal income.

                If, however, you earned the original income from renting out properties or from capital gains and then invested it, you can write the investment off as a loss against your overall capital gains, pay $0 on all your rental/share appreciation, and only pay for the startup's employees, with no tax on your original income that year as a result.

                If you have asset wealth, you don't get taxed twice like this as you can write it off. If you have income based savings wealth, you always get taxed and can't count it against investments you make.

                1900 is obviously different, but income taxes help people with assets retain them, for the reasons mentioned above. If assets were taxed at a higher rate and you could not personally count liabilities against your capital gains (as with income), it would be the opposite scenario.

                We say capital gains tax is all about wealth, but it's not: the US has no wealth tax. Capital gains tax is just a lower tax on unearned income, plus the ability to intermingle that income. It's all income at the end of the day - one kind, income from work, the government taxes heavily; the other it taxes less heavily, but most people never earn significant amounts of it.

      • xxpor 2 days ago

        If most of your expenses are software devs, that's not true any more.

        • ttul 2 days ago

          This is one reason why some companies have located engineers in Canada under subsidiaries. Canada not only allows you to deduct R&D costs as an expense, but there is an extremely generous R&D tax credit that yields a negative tax rate on engineers. For Canadian controlled private companies, this represents as much as a 60% refundable tax credit on R&D salaries. For foreign-owned companies, the benefit is smaller but still significant.
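
          As a rough sketch of that arithmetic (the ~60% combined refundable rate is the figure cited above for Canadian-controlled private companies; real SR&D rates vary by province and expenditure type):

            # Hypothetical numbers; actual SR&ED rates differ by province and status.
            engineer_salary = 100_000          # CAD of eligible R&D salary
            refundable_credit_rate = 0.60      # rate cited above for CCPCs
            credit = engineer_salary * refundable_credit_rate
            net_cost = engineer_salary - credit
            print(credit, net_cost)            # 60000.0 40000.0 -- refunded even at $0 profit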

          The Trump tax policy was a bizarre move for a country that relies so heavily on homegrown innovation. But then again, so was the entire Trump presidency.

          • ckcheng a day ago

            Wait, are you saying that in Canada an R&D software company can essentially sell a dollar (of SDE-produced goods) for a dollar and still get a tax refund from the government?

        • perfmode 2 days ago

          How so?

          • flutas 2 days ago

            In short, section 174[0].

            It pushed almost all SWE jobs to be classified as R&D jobs, which changed how taxes are calculated on companies.

            They have an example at [0], but I'll summarize it here. For $1mm of income and $1mm of SW dev cost, with $0 profit, previously you paid $0 in tax (your income was offset by your R&D costs). Now the dev cost has to be amortized over 5 years instead of deducted up front, so you can't claim all of the $1mm that year anymore and you'd owe roughly $200k in tax in year one despite having no profit.

            [0]: https://blog.pragmaticengineer.com/section-174/
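
            A sketch of that year-one arithmetic, assuming the 21% US corporate rate and Section 174's five-year amortization with its half-year convention (only 10% of the cost deductible in year one); the numbers follow the linked example:

              revenue = 1_000_000
              dev_cost = 1_000_000                  # all software salaries, now "R&D"

              # Pre-2022: deduct the full cost in the year it was paid.
              old_taxable = revenue - dev_cost      # 0 -> $0 of tax

              # Post-2022 (Section 174): amortize over 5 years, half-year convention.
              year_one_deduction = dev_cost * 0.10  # only 10% deductible in year one
              new_taxable = revenue - year_one_deduction
              print(new_taxable * 0.21)             # 189000.0 -- the "about $200k" above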

          • the_gorilla 2 days ago

            There are tons of taxes on hiring employees that you have to pay even if you're losing money: payroll taxes, mandatory insurance taxes, unemployment taxes, and probably more I don't remember off the top of my head.

          • nickspag 2 days ago

            In an effort to lower the deficit effects of the Trump tax cuts (i.e. increase revenue so they could cut further in other areas), they reclassified software developers' salaries so that they have to be amortized over multiple years instead of expensed in the year they're paid. That treatment is usually reserved for assets, which have intrinsic value that could be sold.

            In this case, businesses have to pay taxes on "profit" they don't have, because it immediately went to salaries. A lot of small businesses were hit extremely hard.

            They tried to fix it in the recent tax bill but it was killed in the Senate last I checked. You can see more here: https://www.finance.senate.gov/chairmans-news/fact-sheet-on-....

            Also, software developers in Oil and Gas industries are exempt from this :)

      • IncreasePosts 2 days ago

        Sure. But there are a lot of other tax advantages. For example, at least where I am, non profits don't pay sales tax on purchases, and don't have to pay into unemployment funds. I'm sure there is more, but I'm not super familiar with this world.

        • caeril 2 days ago

          Corporations don't generally pay sales tax either, if the bean counters can justify the purchase as COGS. There are plenty of accountants who can play fast and loose with what constitutes COGS.

          • sethaurus 2 days ago

            For anyone else unfamiliar with this initialism:

            > Cost of goods sold (COGS) refers to the direct costs of producing the goods sold by a company. This amount includes the cost of the materials and labor directly used to create the good. It excludes indirect expenses, such as distribution costs and sales force costs.

      • daveguy 2 days ago

        Good point. That sounds a lot like fraud.

        • svnt 2 days ago

          Not paying taxes while losing money sounds like fraud to you?

          What do you propose should be taxed, exactly?

          • Spivak 2 days ago

            Cash flow. Profit gets taxed at x%; cash flow that was offset with losses/expenses gets taxed at y% < x. A company that does $100Mil of business and makes no money is very different from a company that does $10k of business and makes no money.
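
            A minimal sketch of that scheme as I read it (the rates x and y are purely illustrative):

              def tax_owed(revenue, profit, x=0.21, y=0.02):
                  # Profit taxed at x%; revenue offset by losses/expenses taxed at y% < x.
                  offset = revenue - profit
                  return profit * x + offset * y

              print(tax_owed(100_000_000, 0))  # 2000000.0 -- big business, no "profit"
              print(tax_owed(10_000, 0))       # 200.0 -- tiny business, no profit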

            • svnt 2 days ago

              Your equations do not account for the difference you mention, they only ensure growth will be slower and riskier.

              • Spivak 2 days ago

                That's fine, and in exchange we get significantly more tax revenue and close a gaping tax-avoidance loophole. If taxing profit were a good proxy for business activity, companies would use it when setting their pricing tiers. But they don't. They use revenue and headcount, because profit can be and is gamed. I can't deduct my expenses on my own taxes and the world didn't end.

          • daveguy 2 days ago

            True, non-profits don't pay taxes on any revenue regardless of expense.

            How do you know they had no profit with all of the deals with major companies and having one of the most popular software services in existence? Non-profits can earn profit, they just don't have to pay taxes on those profits and they can't distribute those profits to stakeholders -- it goes back to the business.

            They are also a private company, and do not have to report revenue, expenses, or profits.

            So yeah, I stand by what I said -- it sounds like fraud. And it deserves an audit.

    • SkyPuncher 2 days ago

      You actually don't even need to sell them. Just sign an exclusive, non-revocable license agreement.

      Practically the same as selling, but technically not. The non-profit still gets to live up to its original mission, on paper, but doesn't really do anything internally.

    • JumpCrisscross 2 days ago

      > there's basically no chance that the sale price of the non-profit assets is going to be $150 billion

      The non-profit’s asset is the value of OpenAI minus the value of its profit-participation units, i.e. the value of the option above the profit cap. Thus, it must be less than the value of OpenAI. The non-profit owns an option, not OpenAI.

    • benreesman 2 days ago

      “You don't get rich writing science fiction. If you want to get rich, you start a religion.”

      ― L. Ron Hubbard

    • TZubiri 2 days ago

      But what is the non-profit going to do with all that money is the question.

  • tomp 2 days ago

    exactly.

    If that's how it works, why wouldn't you start every startup as a non-profit?

    Investment is tax deductible, no tax on profits...

    Then turn it into a for-profit if/when it becomes successful!

    • jameshart 2 days ago

      Donations are not investments. They don’t result in ownership.

  • SkyPuncher 2 days ago

    I've actually worked through a similar situation at a prior startup. We were initially funded by a large hospital system (non-profit) that wanted to foster innovation and a startup mentality. After getting started, it became clear that it was effectively impossible for us to operate like a startup under a non-profit. Namely, traditional funding routes were nigh impossible and the hospital didn't want direct ownership.

    It's been many years, but the plan was essentially this:

    * The original, non-profit would still exist

    * A new, for-profit venture would be created, with the hospital having a board seat and 5% ownership. Can't remember the exact reason behind 5%. I think it was a threshold for certain things becoming a liability for the hospital as they'd be considered "active" owners above 5%. I think this was a healthcare specific issue and unlikely to affect non-profits in other fields.

    * The for-profit venture would seek, traditional VC funding. Though, the target investors were primarily in the healthcare space.

    * As part of funding, the non-profit would grant exclusive, irrevocable rights to its IP to that for-profit venture.

    * Everyone working for the "startup" would need to sign a new employment contract with the for-profit.

    * Voila! You've converted a non-profit into a for-profit business.

    I'm fuzzy on a lot of details, but that was the high level architecture of the setup. It's one of those things where the lawyers earn a BOAT LOAD of money to make sure every technicality is accounted for, but everything is just a technicality. The practical outcome is you've converted a non-profit to a for-profit business.

    Obviously, this can't happen without the non-profit's approval. From the outside, it seems that Sam has been working internally to align leadership and the board with this outcome.

    -----

    What will be interesting is how the employees are treated. These types of maneuvers are often an opportunity for companies to drop employees, renegotiate more favorable terms, and reset vesting schedules.

    • feoren 2 days ago

      > * As part of funding, the non-profit would grant exclusive, irrevocable rights to its IP to that for-profit venture.

      This is the part that should land people literally in jail. A non-profit should not be able to donate its assets to a for-profit, and if it's the same people running both companies, those people must be sent to prison for tax evasion. There is no other way to preserve the integrity of the "non-profit" status with this giant loophole.

    • VirusNewbie 19 hours ago

      > As part of funding, the non-profit would grant exclusive, irrevocable rights to its IP to that for-profit venture.

      Isn't that fraud/stealing from all the donors? I mean, how is that different from just giving money to another business not owned by the non-profit?

  • n2d4 2 days ago

    After the non-profit sells its assets, it would either donate the proceeds in a way that would be aligned with the original mission, or continue to exist as a bag of cash, basically.

    • kweingar 2 days ago

      It seems incredibly convenient that a non-profit's leaders can say "I want equity in a for-profit company, so we will sell our assets to investors (who will hire me) and pass off the proceeds to some other non-profit org run by some other schmuck. This is in the public interest."

      • n2d4 2 days ago

        State regulators have to sign off on the deal; it's not sufficient for the non-profit board to agree to it.

  • winternett 2 days ago

    >Can anybody explain how this actually works?

    Every answer moving forward now will contain embedded ads for Sephora, or something completely unrelated to your prompt...

    That money will go into the pockets of a small group of people who claim they own shares in the company... Then the company will pull in more people who invest in it, and they'll all take profits based on continually rising monthly membership fees, for an app that stole content from social media posts and historical documents others have written, without crediting or compensating them.

  • blackeyeblitzar 2 days ago

    Maybe it's a hint that the tax rate for small and medium companies should be reduced (or other, non-tax laws modified based on company size) to copy the advantages of this nonprofit-to-profit conversion, while taxes for large companies should be increased. That would maybe help make competition fairer and survival easier for startups.

    • sophacles 2 days ago

      This is actually a good idea. I say we go even further and stop wasting so much money cleaning up after companies - get rid of the entire legal entity known as a corporation and let investors shoulder the full liability that comes with their ownership stake.

      • BrawnyBadger53 2 days ago

        History has shown that limited liability is a massive advantage for our economy in encouraging both domestic and foreign investment. Seems unlikely we would put ourselves at a global disadvantage by doing this.

        • sophacles 2 days ago

          History has also shown that limited liability ends up costing me an awful lot of tax money to cover for some twat getting paid out (at a lower tax rate) with no consequences for their actions. Adding liability would certainly lower my taxes, and have a fantastic chilling effect on the type of trash that harm innocent bystanders with their reckless disregard for consequences in the name of chasing a dollar.

  • jdavdc 2 days ago

    My expertise is in NFP hospitals. Generally, when they convert to for-profit, part of the deal is the creation of a foundation, funded with assets, that is ostensibly meant to advance the original not-for-profit mission.

  • baking 2 days ago

    The nonprofit gives all its ownership rights to the for-profit in return for equity. The nonprofit is free to hold the equity and maintain control or sell the equity and use the proceeds for actual charitable purposes.

    As long as the money doesn't go into someone's pocket, it's all good (except that Sam Altman is also getting equity but I assume they found a way to justify that.)

    OpenAI will eventually be forced to convert from a public charity to a private foundation and will be forced to give away a certain percentage of their assets every year so this solves that problem also.

    • jprete 2 days ago

      The significant asset isn't equity, it's control. 51% is much more valuable than 49% when the owned organization is supposedly working towards technology that will completely change how the world works.

HarHarVeryFunny 2 days ago

And more high level exits ... not only Mira Murati, but also Bob McGrew , and Barret Zoph

https://www.businessinsider.com/sam-altman-openai-note-more-...

  • nikcub 2 days ago

    Difficult to see how these two stories aren't related.

    OpenAI has been one of the most insane business stories in years. I can't wait to read a full book about it that isn't written by either Walter Isaacson or Michael Lewis.

    • HarHarVeryFunny 2 days ago

      I've only read Michael Lewis's "Liar's Poker", which I enjoyed, but perhaps that sort of treatment would turn OpenAI into more of a drama (which also seems somewhat true) and gloss over what the key players were really thinking, which is what would be genuinely interesting.

    • edm0nd 2 days ago

      I want Chuck Palahniuk to write the book. It would be dark and amazing.

      • nikcub a day ago

        love chuck because the third act would be sam eating his own head while realising AGI was created to delete itself and taking away everything that is his

      • drmindle12358 a day ago

        Do you have appetite for a poem? @sama made it on my list of Silicon Valley villains [1] long time ago:

        "Villain staging the show / open, close / you can count on the con man to wow you / even though, the only trick he knows / is the “law” of scale / but let's just hope / The con man doesn't turn into evil / when the thing he has is real and powerful"

        [1] https://www.drmindle.com/ai-is-not-dangerous/#villains-in-th...

addedlovely 2 days ago

In that case, where can I apply for my licensing fee for my content they have scraped and trained on?

List of crawlers for those who now want to block: https://platform.openai.com/docs/bots
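
For example, a minimal robots.txt using the user agents OpenAI documents at that link (GPTBot for training crawls, ChatGPT-User and OAI-SearchBot for browsing/search):

  User-agent: GPTBot
  Disallow: /

  User-agent: ChatGPT-User
  Disallow: /

  User-agent: OAI-SearchBot
  Disallow: /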

  • nikcub 2 days ago
    • username223 2 days ago

      Don't stop at robots.txt blocking. Look through your access logs, and you'll likely find a few IPs generating a huge amount of traffic. Look them up via "whois," then block the entire IP range if it seems like a bot host. There's no reason for cloud providers to browse my personal site, so if they host crawlers, they get blocked.
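
      A quick sketch of that triage in Python (the log path and combined-log format are assumptions; adjust for your server):

        from collections import Counter

        counts = Counter()
        with open("access.log") as log:        # assumed combined-format access log
            for line in log:
                ip = line.split(" ", 1)[0]     # first field is the client IP
                counts[ip] += 1

        # Print the heaviest hitters for a manual whois lookup.
        for ip, n in counts.most_common(20):
            print(n, ip)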

    • 7373737373 a day ago

      Thank you very much for mentioning these! These parasites deserve to burn in hell for their greed and violation of consent

  • cdchn 2 days ago

    I wonder how the AI/copyright arguments will play out in court.

    "If I read your book and I have a photographic memory and can recall any paragraph do I need to pay you a licensing fee?"

    "If I go through your library and count all the times that 'the' is adjacent to 'end' do I need to get your permission to then tell that number to other people?"

    • zelphirkalt a day ago

      Cases where we see the absurdity of copyright in its current form. But either we have it for everyone, including OpenAI, or for no one. Or are perhaps some more equal than others before the law?

zingerlio 2 days ago

We are booing Altman because his bait and switch feels unethical, but many of us saw it coming from a mile away. How did he make this transition, gaming the financial system, so smoothly? Is there no legal guard against such a maneuver, or is he just an insanely good player who circumvented all of them in plain view?

  • joe_the_user 19 hours ago

    Just cobbling together articles I've read... Once the board failed to fire Altman, this transition was a done deal; it was just a matter of when. Before the board fired Altman, he had been engaging in a more covert effort to take control - putting together personal attacks on various board members intended to drive them out, and hiring people loyal to himself with a similar attitude. When the board fired him, they didn't give any clear reasons, and that's the most mysterious part of the situation. Apparently they were all concerned about his campaign to take control of the board, but they each had slightly different reasons for agreeing. I'd further speculate that the board thought in lawyer-advice terms - say as little as possible to avoid a retaliatory lawsuit.

    But the board's lack of communication apparently allowed Altman to demonstrate he was more important to the organization than the formal/legal structure - 90% of employees signed intents to quit, and the board backed down. Altman simply seemed to represent the attitude of many Silicon Valley tech people: once you have a chance at money, don't hold back; do everything you can to make it.

  • hackernewds 2 days ago

    It has been running on an honor code - the assumption that someone pulling off something as slimy as funneling money meant for non-profits would just get shunned by society and business. Yet here we are.

    Ironically the one person with resources fighting it in a tangible way, even if for spite, is Elon Musk.

neilv 2 days ago

The incremental transformation from non-profit to for-profit... does anyone have legal standing to sue?

Early hires, who were lured there by the mission?

Donors?

People who were supposed to be served by the non-profit (everyone)?

Some government regulator?

  • bbor 2 days ago

    This is the most important question, IMO! ChatGPT says that employees and donors would have to show that they were defrauded (lied to), which IMO wouldn’t exactly be hard given the founding documents. But the real power falls to the government, both state (Delaware presumably…?) and federal. It mentions the IRS, but AFAIU the DoJ itself could easily bring litigation based on defrauding the government. Hell, maybe throw the SEC in there!

    In a normal situation, the primary people with standing to prevent such a move would be the board members of the non-profit, which makes sense. Luckily for Sam, the employees helped kick out all the dissenters a long time ago.

    • jjulius 2 days ago

      Genuinely curious because I have no idea how any of this works...

      Would the founding documents actually count as proof of a lie? I feel like the defense could easily make the argument that the documents accurately represented their intent at the time, but as time went on they found that it made more sense to change.

      It seems like, if the founding documents were to be proof of a lie, you'd have to have corresponding proof that the documents were intentionally written to mislead people.

      • bbor 2 days ago

        Great point, and based on my amateur understanding you’re absolutely correct. I was mostly speaking so confidently because these founding documents in particular define the company as being founded to prevent exactly this.

        You’re right that Altman is/will sell it as an unexpected but necessary adaptation to external circumstances, but that’s a hard sell. Potentially not to a court, sadly, but definitely in the public eye. For example:

          We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions… We are committed to providing public goods that help society navigate the path to AGI.   
        
        From 2018: https://web.archive.org/web/20230714043611/https://openai.co...

        And this is the very first paragraph of their founding blog post, from 2015:

          OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.
        
        https://openai.com/index/introducing-openai/
  • nialv7 a day ago

    Well I heard someone named Elon did try.

  • lenerdenator 2 days ago

    Everyone has legal standing to sue at any time for anything.

    Whether the case is any good is another matter.

    • ReaLNero 2 days ago

      This is not at all true, I recommend you look into the exact meaning of "legal standing".

    • moralestapia 2 days ago

      Yeah, so funny, *yawn*.

      Try to contribute to the conversation, though.

      What you say is also untrue, there's a minimum set of requirements that have to be met regarding discovery, etc.

    • blackeyeblitzar 2 days ago

      In the US standing is a specific legal concept about whether you have a valid reason/role to bring up a particular issue. For example most of Donald Trump’s lawsuits around the 2020 election were rejected for a lack of standing rather than on merit (whether the case is any good).

      • leeoniya 2 days ago

        is there a good source that shows which were dismissed as meritless vs ones dismissed due to lack of standing?

HeralFacker 2 days ago

Converting to a for-profit changes the tax status of donations. It also voids plausibility for Fair Use exemptions.

I can see large copyright holders lining up with takedowns demanding they revise their originating datasets since there will now be a clear-cut commercial use without license.

  • zmgsabst 2 days ago

    I hope I can join in, as a consumer, because there’s a difference between using the IP I contribute to conversations for a non-profit and a commercial enterprise.

    • codewench 2 days ago

      I suspect that if you have ever posted copyrightable material online, you will have valid cause to sue them, as they very obviously have incorporated your work for commercial gain. That said, I unfortunately put your chances of winning in court very low.

      • zelphirkalt a day ago

        And why is it that the chances of winning are low? Why do the courts let big tech trample on our rights, while the small man goes to jail or pays fines for much, much less? And how can this situation be improved?

        Hopefully at least in the EU someone will wake up and make better laws or even start applying them to the current situation.

  • shakna 2 days ago

    A non-profit entity will continue to exist. Likely for the reasons you stated.

    • bbor 2 days ago

      Any reasonable court would see right through “well we trained it for the public good, but only we can use it directly”. That’s not really a legal loophole as much as an arrogant ploy. IMO, IANAL

  • lewhoo 2 days ago

    > It also voids plausibility for Fair Use exemptions. I can see large copyright holders lining up with takedowns

    I thought so for a moment, but then again Meta, Anthropic (I just checked, and they have a "for profit and public benefit" status, whatever that means), Google, and that Musk thing aren't non-profits, are they? There are lawsuits in motion for sure, but as things stand today I think AI gets off the hook.

    • __loam 2 days ago

      Doesn't make it morally or ethically okay

      • lewhoo a day ago

        I'm with you there. I'm just not holding my breath for the current state of law.

fourseventy 2 days ago

So are they going to give Elon equity? He donated millions to the non-profit, and now they are going to turn around and make the company for-profit based on the work done with that capital.

  • wmf 2 days ago

    Elon has allegedly refused equity in OpenAI. He seems to want it to go back to its original mission (which isn't going to happen) or die (which isn't going to happen).

    • throwaway314155 2 days ago

      Sam Altman also allegedly had no interest in equity.

      • cdchn 2 days ago

        When you get to tell the ASI what to do, money has little value any more.

        • squidsoup 2 days ago

          Guess he's going to be waiting a long time.

        • BbzzbB 2 days ago

          Until then the $10.5B in equity might come in handy.

      • bmau5 2 days ago

        In the article it says he'll now receive equity

        • throwaway314155 2 days ago

          Indeed that's the point I'm making.

          • bmau5 2 days ago

            Ah sorry I misread your comment

  • LeafItAlone 2 days ago

    Given that Musk was already worried about this and has a legal team the size of a small army, one would expect that any conditions he wanted applied to the donation would have been made at the time.

  • kanbara a day ago

    he doesn’t need the money, for one. he missed out and didn’t control it, and now he’s jealous that it took off. oh well, world’s richest man and smallest violin.

ayakang31415 2 days ago

About a year ago (I believe), Sam Altman touted his mission to promote safe AI, claiming that he had no equity in OpenAI and had never been interested in getting any. Look where we are now. Well played, Sam.

  • upwardbound 2 days ago

    Does that amount to making a false forward-looking financial statement? (Specifically his claim that he wasn’t interested in getting equity in the future.)

    This claim he made was likely helpful in ensuring the OpenAI team’s willingness to bring him back after he was temporarily ousted by the board last year for alleged governance issues. (Basically: “don’t worry about me guys, I’m in this for the mission, not personal enrichment”)

    Since his claim likely helped him get re-hired, he can’t claim it was immaterial.

    I really hope someone from the SEC scrutinizes him someday. The Singularity is too important to let it be run by someone with questionable ethics.

    • jacobsimon 2 days ago

      My unprofessional take: The SEC is concerned primarily with protecting investors. If anything, changing to a normal for-profit structure and removing the cap on returns would be viewed as more investor/market-friendly than their current structure, which is partly to blame for what unfolded last year.

  • truculent 2 days ago

    > well played Sam

    Is it well played if you simply decide to lie brazenly? Anyone can win at monopoly if they decide to steal from the bank.

    • grogenaut 2 days ago

      I can confirm this is the only way I win at monopoly. My strategy for A$$hole is more devious and involves going from ass to president with a similar strategy to my monopoly strategy then making no one want to play again as president. As a great man, Charles Wopr, once said: the only winning move is not to play.

  • onelesd 2 days ago

    Sam and all the others. At this point, there should be required courses in college to teach this seemingly required skill to future corporate USA.

    • apwell23 2 days ago

      [flagged]

      • wheels 2 days ago

        Stripe was founded when Sam Altman was 25. Loopt, Sam's first company, was founded when Sam was 20, and was Sam's mechanism for meeting Paul Graham, so this story is pretty obviously wrong at some level.

  • apwell23 2 days ago

    [flagged]

    • wheels 2 days ago

      Weirdly, this is the second time this has been repeated in this thread, but Sam wasn't even a teenager anymore when he met PG the first time (he would have just turned 20), and he was 25 when Stripe was founded, so your story is obviously wrong.

      • blast 2 days ago

        Right and what's this business about "gave him a 4% stake in stripe"? PG didn't own Stripe.

        • apwell23 a day ago

          How did a teenager get a 4% stake for 15k, and millions for his (failed) startup, if PG had nothing to do with it?

          I was working at dunkin donuts when i was a teenager.

      • apwell23 a day ago

        > Sam wasn't even a teenager anymore when he met PG the first time (but would have just turned 20),

        20 yr age gap in a relationship is still frowned upon in our society unfortunately.

        Also I guess you were tracking sam's legal age closer than Sam himself because he mentioned that he was a teenager when he met PG in an interview. SV is creepier than anyone could ever imagine.

        • wheels a day ago

          I was going based on Wikipedia. Sam was 20 when his YC batch started, and also would have already been 20 for the YC interview. But it was close -- a few months, so it wouldn't surprise me if that gets elided in a dramatic retelling.

          I'm not sure how it's relevant that you worked in a donut shop? Surely you're aware that isn't the peak of over-achievement at the age of 20? God, watch the Olympics if you want to see a bunch of very determined young people. Or go to an Open Source conference. Or, as it were, YC. Sam was only slightly younger than normal there. Some 20 year olds have accomplished a lot.

          You seem to really want to create a villain out of the situation, which is kind of weird. Mentors and investors are usually older, and 20 years isn't rare. That's hardly Silicon Valley specific.

          You also repeated the false thing about him getting a stake in Stripe as a teenager again. Again, Stripe came into existence when Sam was 25. There's just no version of that story that works. I don't know about Sam's Stripe investment, but at that point he was already around YC a lot, even though Loopt was still going. He probably just got in with other angel investors. But at that point he was in his mid-20s and running a Sequoia-backed company, so that's not especially weird.

          (There's genuine stuff to be critical of in the trajectory of OpenAI, but this seems like a really weird spot to latch onto.)

          • apwell23 a day ago

            > Sam was 20 when his YC batch started, and also would have already been 20 for the YC interview.

            PG:

            "Sam Altman, the co-founder of Loopt, had just finished his sophomore year when we funded them, and Loopt is probably the most promising of all the startups we've funded so far. But Sam Altman is a very unusual guy. Within about three minutes of meeting him, I remember thinking 'Ah, so this is what Bill Gates must have been like when he was 19.'"

            Are you being sarcastic by comparing Sam with Olympians?

            What exactly did Sam accomplish before meeting PG to be declared "Bill Gates", or to get 50 million for his startup within 15 minutes of meeting PG?

            Had he been practicing being the "Michael Jordan of listening" (another PG quote) since he was 5, like the Olympians?

            • wheels a day ago

              You're really just making shit up here. He didn't get $50 million for his startup within 15 minutes of meeting Paul Graham. YC's deal back then was about $15k per startup for 7%. Loopt didn't raise $50 million even over its lifetime -- it was around $30 million, spread across several rounds over several years.

              I was in YC a few batches later and met Paul Graham and Sam in that era. I remember walking around San Francisco with Sam and him telling me about Loopt. He was a few years younger than me (I was 29, he was 24), and I remember being impressed by him.

              And it's possible that Sam listed his age at 19 on his YC application, and that's what PG was going on. He would have probably still been 19 when he filled out the application. Again, this isn't hard to verify -- his birthday is on Wikipedia, and he was in the summer batch of 2005. Interviews are about a month before the batch starts. But there's not really a lot of my point that hinges on if it was a month before or a month after his birthday when they met. More my point was that the stuff about him investing in Stripe as a teenager because PG "gave" it to him is completely bogus.

              It really seems like you have an axe to grind here, and I'm not completely sure why. Again, I think some of the stuff that's happened later in OpenAI is worthy of criticism, but that doesn't mean you have to reinterpret everything that happened before that through some bogyman lens.

              • apwell23 a day ago

                > It really seems like you have an axe to grind here, and I'm not completely sure why.

                Yea, because ppl getting an unfair leg up because they were chosen as the "next Bill Gates" by an SV white male who sees himself in them is merely an "axe to grind".

                You still haven't answered why you think he was like an Olympian when he met PG, other than "he is impressive because he is impressive".

                I feel like I am in some kind of weirdo land here, with totally ridiculous boasts about someone for whom no one can name an actual accomplishment:

                > like an olympian

                > michaal jordan of listening

                > bill gates at 19

                > his brain will be cloned by 2029

                You guys need to send this to HBO for next Silicon Valley season.

                • wheels a day ago

                  I wasn't comparing Sam to an Olympian; I was comparing you to one. Just because you were working in a donut shop at that age doesn't mean that that's the benchmark for achievement. Some people go to the Olympics. I honestly don't know enough about Sam's achievements before then to know if he'd done impressive things.

                  The whole "white man" thing is also a complete straw man. The other two YC-founders-turned YC CEOs of that era were Michael Seibel and Garry Tan, neither of whom are white.

                  It sounds like what you're offended by is the whole YC process -- that there are quick interviews that (back then) translated to small amounts of funding -- that literally the decision was made in a single interview. But that wasn't anything specific to Sam; that's how it worked for everyone. You can find that stupid if you want to, but then might I suggest this is an odd forum to hang out on if you find that to be offensive?

                  • apwell23 7 hours ago

                    > I was comparing you to one. Just because you were working in a donut shop at that age doesn't mean that that's the benchmark for achievement. Some people go to the Olympics. I honestly don't know enough about Sam's achievements before then to know if he'd done impressive things.

                    Exactly. I was comparing myself to him when i mentioned that I worked at a donut shop. At that age I ( and many others) were indistinguishable from him. I went to an ivy league too btw.

                    Your olympian thing is absurd here because a teenager destined to be an olympian is indeed very distinguishable from his/her peers very easily, ppl can tell why this person is special.

                    You keep saying Sam was special ( next bill gates) but fail to tell me why. I asked you multiple times too , but instead you keep attacking me instead for not accepting circular "he is impressive because he is impressive" .

                    > But that wasn't anything specific to Sam; that's how it worked for everyone.

                    I just told you, but you keep ignoring it. Did PG call anyone else Bill Gates or Michael Jordan to his VC friends and publicly in interviews? (Ironic, given Michael Jordan is one of the most impressive athletes and came from nowhere.) Did he continuously give anyone else a leg up despite failed ventures (Loopt or whatever)? I cannot think of anyone else who failed upwards like Sam did, because he had PG backing him with ridiculous and vacuous pumping of his so-called genius.

                    • wheels 6 hours ago

                      Honestly: it's not worth my time to keep arguing with you. You've not taken any accountability for the several demonstrably false claims you've made here.

                      • apwell23 5 hours ago

                        I did not make a "claim"; I merely quoted PG and Sam about the age at which they met. You seem obsessed with him being in his early 20s; I'm not sure why that's relevant here or why it's so important to you. You should tell PG to go issue a correction and "take responsibility" for misquoting Sam's age if that's so important to you. I don't have a creepy obsession with teenagers' ages.

                        Oh yea, you would rather run away and feel smug about some pedantic age thing than substantiate why you think Sam was "impressive" when they met in his 20s.

    • baoha 2 days ago

      To be fair, both of them probably didn't imagine Stripe would become what it is today. You can apply the same logic to any successful company, like the guy who gave up 10% of Apple for some change.

      • manquer 2 days ago

        There is a difference between not imagining it will be valued at 100B and not imagining it will be a 1B+ or a 100M exit.

        It is quite likely they saw the latter as a relatively low-risk expected outcome.

        Even at a 100M exit, which by valley standards (even in the 2010s) is not a lot, 1-2% (after further rounds of dilution) would have yielded a 1-2M return. A 200x return for very little downside, i.e. a gift.

        There is a reason there is FOMO and little due diligence among VCs for really hot startups: most of the time it is about access to the round, which is hard to get, rather than risk of returns. We only read about the spectacular failures like FTX. We don't hear about the Stripe, Airbnb, Figma, OpenAI or SpaceX funding rounds.

      • apwell23 2 days ago

        > probably didn't imagine Stripe would be the one today

        I guess being the Michael Jordan of listening doesn't help with imagination.

georgeplusplus 2 days ago

I never understood why people take non-profit companies to be more altruistic than for-profit ones. Non-profit doesn't mean no profits at all; they still have to be profitable. It just boils down to how the profits are distributed. There are plenty of sleazy institutions that are non-profits, like the NCAA.

Foundations and charitable organizations that publicly raise their funding are a different story, but I'm talking about non-profit companies.

I even had one fellow say that the Green Bay Packers were less corrupt than the other, for-profit NFL teams, which sounds ridiculous.

  • hedora 2 days ago

    Regarding the Packers: At least (unlike literally every other NFL team), they’re not using city tax revenue to build a franchise that can move across the country at the drop of a hat.

    The NFL’s non-profit status is a farce though. Similarly, their misuse of copyright (“you cannot discuss this broadcast”) and the trademark “Super Bowl” (“cannot be used in factual statements regarding the actual Super Bowl”) should have their ownership of that ip revoked, if only because it causes massive confusion about the underlying law with a big chunk of the US population.

thesurlydev 2 days ago

I can't help but wonder if things would be different if Sam Altman wasn't allowed to come back to OpenAI. Instead, the safeguards are gone, challengers have left the company, and the bottom line is now the new priority. All in opposition to ushering in AI advancement with the caution and respect it deserves.

  • elAhmo 2 days ago

    A similar example can be seen in the demise of Twitter under its new owner, who has no safeguards or guardrails - anyone who opposed him is gone, and we can see what state it is in now.

    • senorrib 2 days ago

      With the small difference that Twitter is a for-profit company, unlike OpenAI.

      • burnte 2 days ago

        With respect, you should look again at the article you're commenting on.

        • freeqaz 2 days ago

          This comment gutted me, lmao.

    • MaxHoppersGhost 2 days ago

      Now Twitter has both left and right propaganda instead of just left wing propaganda. Bummer.

  • IAmNotACellist 2 days ago

    What else would you expect from a skeevy backstabber who got kicked out of Kenya for refusing to stop scanning people's eyes in exchange for shitcoin crypto? He was building a global surveillance database with Worldcoin.

    Altman was fucking with OpenAI for long before the board left in protest, since about the time Elon Musk had to leave due to Tesla's AI posing a conflict of interest. He got more and more brazen with the whole fake-altruism shit, up to and including contradicting every point in their mission statement and promise to investors in the "charity."

  • matt3210 2 days ago

    The bottom line was always the priority.

  • wonnage 2 days ago

    Maybe my expectations were too high but they seem to have run out of juice. Every major announcement since the original ChatGPT release has been kind of a dud - I know there have been improvements, but it's mostly the same hallucinatory experience as it was on release day. A lot of the interesting work is now happening elsewhere. It seems like for a lot of products, the LLM part is just an API layer you can swap out if you think e.g Claude does a better job.

  • surfingdino a day ago

    Things would be different for sure. I wonder if people leaving OpenAI has something to do with the prosaic comparison of what they are getting (a salary) and what he's getting (a cool $10B at the current valuation).

  • lyu07282 2 days ago

    It was always a bit too optimistic to think we would develop AGI cautiously. In a way it's not so bad that this happened so soon rather than later, after things had progressed much further. (In theory we could now understand enough to do something about it.)

    Although I guess it doesn't really matter. What if we had all understood climate change earlier? It wouldn't really have made a difference anyway.

phito 2 days ago

I know nothing about companies (esp. in the US), but I find it weird that a company can go from non-profit to for-profit. Surely this would be taken advantage of. Can someone explain to me how this works?

  • Havoc 2 days ago

    That was the point Musk was complaining about.

    In practice it's doable though. You can just create a new legal entity and move stuff and/or do future value-creating activity in the new co. If everyone is on board with the plan on both sides of the move, then it's totally doable with enough lawyers and accountants.

    • mminer237 2 days ago

      If the non-profit is on board with that though, then they're breaking the law. The IRS should reclassify them as a for-profit for private inurement and the attorney general should have the entire board removed and replaced.

      • throwup238 2 days ago

        OpenAI Global, LLC - the entity that actually employs all the engineers, makes revenue from ChatGPT and the API, and pays taxes - has been a for-profit corporation since at least 2019: https://en.wikipedia.org/wiki/OpenAI#2019:_Transition_from_n...

        The IRS isn’t stupid. The rules on what counts as taxable income and what the nonprofit can take tax-free have been around for decades.

        • xpe 2 days ago

          Whatever you think of the IRS, they aren't the master of their own destiny:

          https://www.propublica.org/article/how-the-irs-was-gutted (2018)

          > An eight-year campaign to slash the agency’s budget has left it understaffed, hamstrung and operating with archaic equipment. The result: billions less to fund the government. That’s good news for corporations and the wealthy.

        • mminer237 2 days ago

          Still, if the non-profit has private inurement, the non-profit shouldn't be able to take anything tax-free as it wouldn't qualify as a 501(c)(3). The bigger issue is definitely Delaware non-profit law though.

    • duchenne 2 days ago

      But, if the non-profit gives all its assets to the new legal entity, shouldn't the new legal entity be taxed heavily? The gift tax rate goes up to 40% in the US. And 40% of the value of openAI is huge.

      • baking 2 days ago

        A non-profit can't give away its assets to a private entity, but it can exchange its assets for fair value, in this case, equity in the for-profit.

      • SkyPuncher 2 days ago

        You don't need to sell/give the assets away to allow the for-profit to use them.

        You sign an exclusive, non-revocable licensing agreement. Ownership of the original IP remains 100% with the original startup.

        Now, this only works if the non-profit's board is on-board.

    • 0xDEAFBEAD 2 days ago

      ICYMI, Elon Musk restarted his lawsuit a month or two ago: https://www.reuters.com/technology/elon-musk-revives-lawsuit...

      I'm wondering if OpenAI's charter might provide a useful legal angle. The charter states:

      >OpenAI’s mission is to ensure that [AGI ...] benefits all of humanity.

      >...

      >We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

      >Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.

      >...

      >We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.

      >We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. [...]

      >...

      https://openai.com/charter/

      I'm no expert here, but to me, this charter doesn't appear to characterize OpenAI's behavior as of the year 2024. Safety people have left, Sam has inexplicably stopped discussing risks, and OpenAI seems to be focused on racing with competitors. My question: Is the charter legally enforceable? And if so, could it make sense for someone to file an additional lawsuit? Or shall we just wait and see how the Musk lawsuit plays out, for now?

      • mminer237 2 days ago

        I would think it is legally enforceable, but I suspect Kathy Jennings is the only person who has standing to sue over it.

        • 0xDEAFBEAD 2 days ago

          So perhaps we can start a campaign of writing letters to her?

          I'm curious about the "fiduciary duty" part. As a member of humanity, it would appear that OpenAI has a fiduciary duty to me. Does that give me standing? Suppose I say that OpenAI compromises my safety (and thus finances) by failing to discuss risks, having a poor safety culture (as illustrated by employee exits), and racing. Would that fly?

          • mminer237 2 days ago

            Under Lujan v. Defenders of Wildlife, you have to suffer a concrete, discernible injury. They can have broken their promise to you, but unless you can prove the dollar amount that harmed you, you can't sue.

            Even if you donated to them, all states I know of assign sole oversight for proper management of those funds to the state AG. If you donate to a food bank and they use the money to buy personal Ferraris instead of helping the hungry, that's clearly illegal, but you'd be out the money either way, so you wouldn't have standing to sue. The attorney general has to sue for mismanagement of funds. If you feel OpenAI is violating their charter, I would definitely encourage writing to Mrs. Jennings to voice that opinion.

        • cdchn 2 days ago

          "Humanity vs. OpenAI" would look good on a docket.

        • pclmulqdq 2 days ago

          Elon Musk absolutely has standing, as one of the biggest donors to the nonprofit. I assume he will settle for some ownership in the for-profit, though.

          • mminer237 2 days ago

            Maybe. In California that has been ruled to not be the case: https://www.thetaxadviser.com/issues/2021/sep/donor-no-stand...

            I don't know the laws of Delaware well, but I would be surprised if he has standing even as a donor.

            • pclmulqdq a day ago

              That was also specifically about a donor-advised fund, which is different from a nonprofit corporation. Elon Musk's tort would be something like "fraud in the inducement" or some weird theory like that, not a breach of fiduciary duty.

          • 0xDEAFBEAD a day ago

            I suppose Open Philanthropy does as well, then.

          • melodyogonna 2 days ago

            Didn't he already refuse the shares offered to him?

            • pclmulqdq 2 days ago

              I'm sure they just didn't offer him enough shares.

      • anigbrowl 2 days ago

        LOL, it's like a bank with stained glass windows to make passers-by think it's a church, isn't it.

      • whamlastxmas 2 days ago

        Sam had a blog post literally two days ago that acknowledged risks. There's also still a sizeable focus on safety at OpenAI, with people in roles dedicated to it.

        • johnsimer 2 days ago

          Is there a sizable focus on safety? Last time I heard there was only like one safety person left on the team

    • crystal_revenge 2 days ago

      > That was the point musk was complaining about.

      I think the real issue Musk was complaining about is that sama is quickly becoming very wealthy and powerful and Musk doesn't want any competition in this space.

      Hopefully some people watching all this realize that the people running many of these big AI related projects don't care about AI. Sam Altman is selling a dream about AGI to help make himself both wealthier and more powerful, Elon Musk is doing the same with electric cars or better AI.

      People on HN are sincerely invested in the ideas behind these things, but it's important to recognize that the people pulling the strings largely don't care outside how it benefits them. Just one of the many reasons, at least in AI, truly open source efforts are essential for any real progress in the long run.

      • squidsoup 2 days ago

        The notion that consciousness is going to emerge in a system where neurons are modelled as bits is laughable.

        • steveoscaro 2 days ago

          The famous last words of humanity before Skynet wakes up (mostly joking, but only mostly)

  • moralestapia 2 days ago

    It's not weird, it's illegal.

    There's a lot of jurisprudence around preventing this sort of abuse of the non-profit concept.

    The reason why the people involved are not on trial right now is a bit of a mystery to me, but could be a combination of:

    * Still too soon, all of this really took shape in the past year or two.

    * Only Musk has sued them, so far, and that happened last month.

    * There's some favoritism from the government to the leading AI company in the world.

    * There's some favoritism from the government to a big company from YC and Sam Altman.

    I do believe Musk's lawsuit will go through. The last two points are worth less and less with time as AI is being commoditized. Dismantling OpenAI is actually a business strategy for many other players now. This is not good for OpenAI.

    • tomp 2 days ago

      > Dismantling OpenAI is actually a business strategy for many other players now.

      Which ones exactly?

      NVIDIA is drinking sweet money from OpenAI.

      Microsoft & Apple are in cahoots with it.

      Meta/Facebook seems happy to compete with OpenAI on a fair playing field.

      Anthropic lacks the resources.

      Amazon doesn't seem to care.

      Google is asleep.

      • photonthug 2 days ago

        Meta has to be happy someone else is currently looking as sketchy as they are. Thus their business strategy is to limit OpenAI's power and influence as much as possible while avoiding any appearance of direct competition, letting the other guy soak up the bad PR.

        Amazon gets paid either way, because even if OpenAI doesn't use them, where are you going to host the API that talks to OpenAI?

        If OpenAI looks weakened, I think we'll see that everyone else has a service they want you to try. But there's no use in making much noise about that, especially during an election year. No matter who wins, the rejected everywhere will blame AI, and who knows what that will look like. So, sit back and wait for the leader of the pack to absorb all the damage.

      • kranke155 2 days ago

        Google is asleep? Gemini is the product of a company that's asleep?

        • 9dev 2 days ago

          Gemini is the product of a company that is still half-asleep. We’re trying to work with it on a big data case, and have seen everything, from missing to downright wrong documentation, missing SDKs and endpoints, random system errors and crashes, clueless support engineers… it’s a mess.

          OpenAI is miles ahead in terms of ecosystem and platform integration. Google can come up with long context windows and cool demos all they want, OpenAI built a lot of moat while they were busy culling products :)

          • kranke155 2 days ago

            Fair enough.

            I didn't realise it was that bad.

        • throwup238 2 days ago

          You’re right, Gemini is more of a product from a company in a vegetative state.

        • fourseventy 2 days ago

          Gemini thinks the founding fathers of America were black and that the Nazis were racially diverse, so yeah.

      • moralestapia 2 days ago

        >NVIDIA is drinking sweet money from OpenAI.

        NVIDIA makes money from any company doing AI. I would be surprised if OpenAI was a whole digit percentage of their revenue.

        >Microsoft & Apple are in cahoots with it.

        Nope. Apple is using OpenAI to fill gaps where their current model falls short. That doesn't sound like a long-term partnership.

        >Meta/Facebook seems happy to compete with OpenAI on a fair playing field.

        They want open source models to rule, obliterating proprietary models out of existence in the process.

        >Anthropic lacks the resources.

        Which is exactly why it would be better for them if OpenAI did not exist. It's the same for all the other AI companies out there.

        >Amazon doesn't seem to care.

        Citation needed. AWS keeps putting out market-leading products; they just don't make a big fuss about it.

        >Google is asleep.

        I'll give you this one. I have no idea why they keep Pichai around.

        • Thrymr 2 days ago

          > I would be surprised if OpenAI was a whole digit percentage of their revenue.

          It is not publicly known how much revenue Nvidia gets from OpenAI, but it is likely more than 1%, and they may be one of the top four unnamed customers in Nvidia's 10-Q filing, which would mean at least 10% of quarterly revenue, or about $3 billion [0].

          That's not nothing.

          [0] https://www.yahoo.com/tech/nvidia-gets-almost-half-revenue-0...
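
          A rough sketch of where that figure comes from, for anyone checking the arithmetic. The ~$30B quarterly revenue is my own approximation for the period that 10-Q covers, and 10% is the SEC disclosure threshold for major customers:

              # Back-of-the-envelope check; assumed figures, not numbers from the filing itself.
              quarterly_revenue = 30e9     # assumed ~$30B Nvidia quarterly revenue (USD)
              disclosure_threshold = 0.10  # customers at/above 10% of revenue get disclosed
              print(f"${quarterly_revenue * disclosure_threshold / 1e9:.1f}B")  # -> $3.0B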

        • cdchn 2 days ago

          >I would be surprised if OpenAI was a whole digit percentage of their revenue.

          As opposed to? The idiom "I wouldn't be surprised" usually means you think what you're saying. If you negate that, are you saying what you _don't_ think is the case? I may be reading too much into what's probably a typo.

          • stonogo 2 days ago

            I read it as "I would be surprised if OpenAI were spending enough to constitute even 1% of Nvidia's revenue."

  • xwowsersx 2 days ago

    At first, I thought, “Wow, if companies can start as nonprofits and later switch to for-profit, they’ll exploit the system.” But the more I learned about the chaos at OpenAI, the more I realized the opposite is true. Companies will steer clear of this kind of mess. The OpenAI story seems more like a warning than a blueprint. Why would any future company want to go down this path?

    • xiphias2 2 days ago

      It's quite simple: it attracted a talent pool of people who already had enough money that they quit their well-paying jobs at for-profit companies, in part because they wanted to keep working at a high-impact non-profit.

      Now that OpenAI has found its product-market fit, the early visionaries aren't needed anymore (although I'm sure the people still working there are amazing).

      • cdchn 2 days ago

        I think OpenAI took this play right out of one of its founding donors' playbooks. Pretend your company has lofty goals and you can get people to slide into moral relativism and work superduper hard for you. These people definitely have framed posters of the "If you want to build a ship, don't drum up the men to gather wood, divide the work, and give orders. Instead, teach them to yearn for the vast and endless sea" quote somewhere in their living places/workspaces.

    • insane_dreamer 2 days ago

      > Why would any future company want to go down this path?

      most would happily sell their soul and deal with any mess to reach a $150B valuation

  • blackeyeblitzar 2 days ago

    It is going to be taken advantage of. Musk and others have criticized this “novel” method of building a company. If it is legal then it is a puzzling loophole. But another way to look at it is it gives small and vulnerable companies a chance to survive (with different laws and taxes applying to the initial nonprofit). If you look at it as enabling competition against the big players it looks more reasonable.

    • entropi a day ago

      >But another way to look at it is it gives small and vulnerable companies a chance to survive (with different laws and taxes applying to the initial nonprofit).

      I feel like this is quite a slippery slope, though. Should we also give small companies a right to violate trademarks? Copyright? Kill people? These could also give them a chance to compete against big players.

  • csomar 2 days ago

    I am not a tax specialist but from my understanding a non-profit is a for-profit that doesn't pay dividends. Why would the government care?

    • freedomben 2 days ago

      No, a non-profit is one in which there are no shareholders. The non-profit entity can own a lot and be extremely successful and wealthy, but it cannot give that money to any shareholders. It can pay out large salaries, but those salaries are scrutinized. It doesn't prevent abuse, and it certainly doesn't prevent some unscrupulous person from becoming extremely wealthy with a non-profit, but it is a little more complicated and limiting than you would think. Also, you get audited with routine regularity and if you are found in violation you lose your tax-exempt status, but you still are not a for-profit.

      • bbor 2 days ago

        Yes: non-profits usually have members, not shareholders.

        And, most importantly: non-profit charities (not the only kind of nonprofit, but presumably what OpenAI was) are legally obligated to operate “for the public good”. That’s why they’re tax exempt: the government is basically donating to them, with the understanding that they’re benefiting the public indirectly by doing so, not just making a few people rich.

        In my understanding, this is just blatant outright fraud that any sane society would forbid. If you want to start a for-profit that’s fine, but you’d have to give away the nonprofit and its assets, not just roll it over to your own pocketbook.

        God I hope Merrick Garland isn’t asleep at the wheel. They’ve been trust busting like mad during this administration, so hopefully they’re taking aim at this windmill, too.

        • philwelch 2 days ago

          > God I hope Merrick Garland isn’t asleep at the wheel. They’ve been trust busting like mad during this administration, so hopefully they’re taking aim at this windmill, too.

          Little chance of that as Sama is a big time Democrat fundraiser and donor.

          • bbor 2 days ago

            So are Google and Facebook :shrug:

            Can’t find a good source for both rn but this one has alphabet in the top 50 nationwide for this election: https://www.opensecrets.org/elections-overview/top-organizat...

            edit: and Sam Altman isn’t exactly donating game changing amounts — around $300K in 2020, and seemingly effectively nothing for this election. That’s certainly nothing to sneeze at as an individual politician, but that’s about 0.01% of his net worth (going off Wikipedia’s estimate of $2.8B, not counting the ~$7B of OpenAI stock coming his way).

            https://www.dailydot.com/debug/openai-sam-altman-political-d...
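
            (To check my own arithmetic, a trivial sketch; both inputs are the figures cited above, not independent data:)

                donations = 300_000   # reported 2020 donations, per the link above
                net_worth = 2.8e9     # Wikipedia's net-worth estimate, in USD
                print(f"{donations / net_worth:.4%}")  # -> 0.0107%, i.e. about 0.01%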

            • philwelch 2 days ago

              > So are Google and Facebook

              When you see any numbers for corporations contributing to political campaigns, that's actually just measuring the contributions from the employees of those corporations. That's why most corporations "donate to both parties"--because they employ both Republicans and Democrats.

      • whamlastxmas 2 days ago

        I’m not sure extreme wealth is possible with a non-profit. You can pay yourself half a million a year, get incredible kickbacks by the firms you hire to manage the nonprofits investments, have the non-profit hire outside companies that you have financial interests in, and probably some other stuff. But none of these things are going to get you a hundred million dollars out of a non profit. The exception seems to be OpenAI which is definitely going to be netting at least a couple people over a billion dollars, but as Elon says, I don’t understand how or why this is possible

        • freedomben 2 days ago

          Yes, that's definitely the vast majority of cases. I actually had Mozilla and their CEO in mind when I was thinking of "extreme" wealth. I've also heard some of the huge charities in the US have execs pulling down many millions per year, but I don't want to name names because I'm not certain.

          • blendergeek 2 days ago

            In the USA, the salaries of execs of non-profits are publicly listed in their form 990s they file with the IRS.

            Name names. We can look it up.

      • csomar 2 days ago

        > No, a non-profit is one in which there are no shareholders.

        Again, I am not a lawyer, but that makes no sense. Otherwise, couldn't anyone claim the non-profit? So clearly there are some beneficial owners out there somehow.

        • blackhawkC17 2 days ago

          The nonprofit is controlled by trustees and bound by its charter, not shareholders. Any profit a nonprofit organization makes is retained within the organization for its benefit and mission, not paid out to shareholders.

      • antaviana 2 days ago

        Has OpenAI been profitable so far? If not, is there any substantial tax that you have to pay in the US as a for-profit organization if you are not profitable?

    • sotix 2 days ago

      A non-profit is a company that for accounting purposes does not have shareholders and therefore keeps nothing in retained earnings at the end of the period. The leftover money must be distributed (e.g. as salaries, towards the stated mission, etc.). Their financial statements list net profit for the period and nothing is retained.

      • matwood 2 days ago

        The money doesn't have to be used. Many non-profits have very large balance sheets of cash and cash equivalent assets. The money just won't be paid out as dividends to shareholders.

        • sotix a day ago

          Correct. They carry a net assets balance at the end of the period. But they do not retain earnings because there are no shareholders to pay out.

    • jprete 2 days ago

      That's not correct; they also have tax advantages and a requirement to fulfill their charter.

    • moralestapia 2 days ago

      Non-profits are tax-exempt, that's why they're carefully[1] regulated.

      1: In principle; in practice, well, we'll see with this one!

    • brap 2 days ago

      Isn’t transferring all of your value to a for-profit company that can pay dividends, kinda the same thing?

  • m3kw9 2 days ago

    The NFL used to be a nonprofit and is now for-profit. OpenAI can use similar routes.

    • walthamstow 2 days ago

      Not an accountant, but there are different kinds of nonprofits: OpenAI is a 501(c)(3) (religious/charitable/educational) whereas the NFL was a 501(c)(6) (trade association).

      Obviously we all think of the NFL as a big money organisation, but it basically just organises the fixtures and the referees. The teams make all the money.

      • FireBeyond 2 days ago

        If you want to be pedantic, in legal terms, no, the NFL is a big money org ($13B+/yr in revenue, with a commissioner earning $65M/yr).

        They pay dividends to the teams, however, yes. But all that revenue (which is distinct from team revenue) is actually legally earned by the NFL itself.

        • walthamstow a day ago

          $65M a year... Wow! Richard Masters, the chief exec of the Premier League (the most watched, most popular sports league in the world), is on about £2m a year.

srvmshr 2 days ago

It seemed only a matter of time, so it isn't very surprising. A capped-profit company running expensive resources at Internet scale, headed by Altman, wasn't going to last forever in that state. That, or it would get gobbled up by Microsoft.

Interesting timing for the news, since Murati left today, gdb is 'inactive', and Sutskever has left to start his own company. Also seeing a few OpenAI folks announcing their future plans today on X/Twitter.

DebtDeflation 2 days ago

Wouldn't surprise me if this was the actual cause of the revolt that led to Altman's short-lived ouster; they just couldn't publicly admit to it, so they made up a bunch of other nonsensical explanations.

Mistletoe 2 days ago

Feels like when Napoleon declared himself emperor, and countless other times when humans succumbed to power and greed once they were finally in the position to make that decision. I guess I'm stupid for holding out hope that Sam would be different.

>Beethoven's reaction to Napoleon Bonaparte's declaration of himself as Emperor of France in May 1804 was to violently tear Napoleon's name out of the title page of his symphony, Bonaparte, and rename it Sinfonia Eroica

>Beethoven was furious and exclaimed that Napoleon was "a common mortal" who would "become a tyrant"

code51 2 days ago

Do the previous investments still count as "donations"? Elon must have something to say...

  • Maledictus 2 days ago

    And the tax office! If this works, many companies will be founded as non-profits first in the future.

game_the0ry 2 days ago

OpenAI founded as non-profit. Sam Altman goes on Joe Rogan Podcast and says he does not really care about money. Sam gets caught driving around Napa in a $4M exotic car. OpenAI turns into for-profit. 3/4 of founding team dips out.

Sketchy.

This whole Silicon Valley attitude of fake effective altruism, the "I do it for the good of humanity, not for the money (but I actually want a lot of money)" bullshit, is so transparent and off-putting.

@sama, for the record - I am not saying making money is a bad thing. Labor and talent markets should be efficient. But when you pretend to be altruistic when you are obviously not, you come off as hypocritical instead of altruistic. Sell out.

  • moozilla 2 days ago

    Couldn't find the JRE clip, but here's a recent one where he says "I don't really need more money." This is how I always understood it: he's already worth billions from past ventures, so what difference does a stake in OpenAI make?

    https://www.youtube.com/watch?v=PScOZzzXnDA

  • peanuty1 2 days ago

    Regarding the 4 million dollar car, Sam already made a ton of money from Reddit and being President of YC.

    • game_the0ry 2 days ago

      Liking and buying expensive cars is not wrong.

      But buying a $4M car while saying you do not care about money is a misalignment between words and actions, which comes off as untrustworthy.

      • MaxHoppersGhost 2 days ago

        Maybe he meant he doesn’t care about it so he wastes it on super expensive things. Simple definitional misunderstanding

  • klabb3 2 days ago

    I swear the reason we have so many sociopaths is because of how goddamn easy it is to fool people; it's like stealing candy from kids. Just put on the pseudo-intellectual mask and say that you care deeply about grand issue X, and people will just believe you at face value, despite your entire track record showing you care only about power, money, and status.

mtlmtlmtlmtl 2 days ago

The most surprising thing to me in this is that the non-profit will still exist. Not sure what the point of it is anymore. Taken as a whole, OpenAI is now just a for-profit entity beholden to investors and Sam Altman as a shareholder. The non-profit is really just vestigial.

I guess technically it's supposed to play some role in making sure OpenAI "benefits humanity". But as we've seen multiple times, whenever that goal clashes with the interests of investors, the latter wins out.

  • bayindirh 2 days ago

    > The most surprising thing to me in this is that the non-profit will still exist.

    That entity will scrape the internet and train the models while claiming "it's just research", so it can argue that it's all fair use.

    At this point it's not even funny anymore.

    • lioeters 2 days ago

      Scraping the entire internet for training data without regard for copyright or attribution - specifically to use for generative AI to produce similar content for profit. How this is being allowed to happen legally is baffling.

      It does suit the modus operandi of a number of American companies that start out as literally illegal/criminal operations until they get big and rich enough to pay a fine for their youthful misdeeds.

      By the time some of them get huge, they're in bed with the government to dominate the market.

      • mdgrech23 2 days ago

        The people running the show are well connected and stand to make billions, as do would-be investors. Give a few key players a share in the company and they forget their government job is to regulate.

        • SoftTalker 2 days ago

          They are also moving so much faster than the regulators and legislatures, it's just impossible for people working basically the same way they did in the 19th century to keep up.

        • barbazoo 2 days ago

          More likely the legal system just hasn’t caught up.

          • llm_trw 2 days ago

            Maybe, but for the first time in a century there is more money to be made in weakening copyright rather than strengthening it.

            • Terr_ 2 days ago

              That's an interesting way to look at it; however, on reflection, I think I usually wanted to "weaken copyright" because it would empower individuals against entrenched rent-seeking interests.

              If it's only OK to scrape, lossy-compress, and redistribute book-paragraphs when it gets blended into a huge library of other attempts, then that's only going to empower big players that can afford to operate at that scale.

            • archagon 2 days ago

              The big companies will sign lucrative data sharing deals with each other and build a collective moat, while open source models will be left to rot. Copyright for thee but not for me.

            • dingnuts 2 days ago

              god forbid that actually be happening in a way to improve the commons

            • vezycash 2 days ago

              > for the first time in a century there is more money to be made in weakening copyright rather than strengthening it

              Nope. The law will side with whoever pays the most. Once OpenAI solidifies its top position, only then will regulations kick in. Take YouTube, for example—it grew thanks to piracy. Now, as the leader, ContentID and DMCA rules work in its favor, blocking competition. If TikTok wasn’t a copyright-ignoring Chinese company, it would’ve been dead on arrival.

              • Sakos 2 days ago

                We're already seeing it in things like Google buying rights to Reddit data for training. It's already happening. Only companies who can afford to pay will be building AI, so Google, Microsoft, Facebook, etc.

          • rayiner 2 days ago

            You’re both correct. The legal system has absolutely no idea how to handle the copyright issues around using content for AI training data. It’s a completely novel issue. At the same time, the tech companies have a lot more money to litigate favorable interpretations of the law than the content companies.

          • xpe 2 days ago

            Copyright concerns are only the tip of the iceberg. Think about the range of other harms and disruptions for countries and the world.

      • RIMR 2 days ago

        >How this is being allowed to happen legally is baffling.

        It's completely unprecedented.

        We allowed scraping images and text en masse when search engines used the data to let us find stuff.

        We allow copying of style, and don't allow writing styles and aesthetics to be copyrighted or trademarked.

        Then AI shows up, and people change lanes because they don't like the results.

        One of the things that made me tilt towards the side of fair use was a breakdown of the Stable Diffusion model. The SD 2.1 base model was trained on 5.85 billion images, all normalized to 512x512 BMP. That's roughly 1MB per image, for a total of about 5.85PB of BMP files. The resulting model is only 5.2GB. That's more than 99.9999% data loss from the source data to the trained model.

        For every 1MB BMP file in the training dataset, less than 1 byte makes it into the model.

        I find it extremely difficult to call this redistribution of copyrighted data. It falls cleanly into fair use.
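
        A quick sketch of that arithmetic, in case anyone wants to check it (the image count and per-image size are the rough approximations above, not official LAION or Stability numbers):

            # Rough compression arithmetic for the SD 2.1 base model.
            images = 5.85e9          # training images (approximate)
            bytes_per_image = 1e6    # ~1 MB per 512x512 uncompressed bitmap (approximate)
            model_bytes = 5.2e9      # size of the released checkpoint

            dataset_bytes = images * bytes_per_image                              # ~5.85 PB
            print(f"{model_bytes / dataset_bytes:.2e} of input bytes retained")   # ~8.89e-07
            print(f"{model_bytes / images:.2f} bytes per source image")           # ~0.89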

        • ang_cire 2 days ago

          Except it's not just about redistribution of copyrighted data; it's about usage and how the data was obtained. We don't get to obtain and use copyrighted content without permission, but they do? Hell no.

          Their argument against this amounts to "we're not using it like they intend it to be used, so it's fine if we obtain it illegally", and that's a BS standard, totally divorced from any legal reality.

          Fair use covers certain transformative uses, certainly, but it doesn't cover illegally obtaining the content.

          You can't pirate a book just because you want to use it transformatively (which is exactly what they've done), and that argument would never hold up for us as individuals, so we sure as hell shouldn't let tech companies get a special carve-out for it.

      • jstummbillig 2 days ago

        It's not baffling at all. It's unprecedented and it's hugely beneficial to our species.

          The anti-AI stance is what is baffling to me. The path trodden is what got us here, and obviously nobody could have paid people upfront for the wild experimentation that was necessary. The only alternative is not having done it.

          Given the path it has put us on, people are either insanely cruel or just completely detached from reality when it comes to what is necessary to do entirely new things.

        • anon7725 2 days ago

          > it's hugely beneficial to our species.

          Perhaps the biggest “needs citation” statement of our time.

          • Terr_ 2 days ago

            I can easily imagine people X decades from now discussing this stuff a bit like how we now view teeth-whitening radium toothpaste and putting asbestos in everything, or perhaps more like the abuse of Social Security numbers as authentication and redlining.

            Not in any weirdly-self-aggrandizing "our tech is so powerful that robots will take over" sense, just the depressingly regular one of "lots of people getting hurt by a short-term profitable product/process which was actually quite flawed."

            P.S.: For example, imagine having applications for jobs and loans rejected because all the companies' internal LLM tooling is secretly racist against subtle grammar-traces in your writing or social-media profile. [0]

            [0] https://www.nature.com/articles/s41586-024-07856-5

            • squigz 2 days ago

              > P.S.: For example, imagine having applications for jobs and loans rejected because all the companies' internal LLM tooling is secretly racist against subtle grammar-traces in your writing or social-media profile. [0]

              We don't have to imagine such things, really, as that's extremely common with humans. I would argue that fixing such flaws in LLMs is a lot easier than fixing it in humans.

              • Terr_ 2 days ago

                Fixing it with careful application of software-in-general is quite promising, but LLMs in particular are a terrible minefield of infinite whack-a-mole. (A mixed metaphor, but the imagery is strangely attractive.)

                I currently work in the HR-tech space, so suppose someone has a not-too-crazy proposal of using an LLM to reword cover letters to reduce potential bias in hiring. The issue is that the LLM will impart its own spin on things, even when a human would say two inputs are functionally identical. As a very hypothetical example, suppose one candidate always writes out the Latin, like Juris Doctor, instead of acronyms like JD, and that causes the model to land on "extremely qualified at" instead of "very qualified at".

                The issue of deliberate attempts to corrupt the LLM with prompt injection or poisonous training data is a whole 'nother can of minefield whack-a-moles. (OK, yeah, too far there.)

                • squigz 2 days ago

                  I don't think I disagree with you in principle, although I think these issues also apply to humans. I think even your particular example isn't a very far-fetched conclusion for a human to arrive at.

                  I just don't think your original comment was entirely fair. IMO, LLMs and related technology will be looked at similarly as the Internet - certainly it has been used for bad, but I think the good far outweighs the bad, and I think we have (and continue to) learn to deal with the issues with it, just as we will with LLMs and AI.

                  (FWIW, I'm not trying to ignore the ways this technology will be abused, or advocate for the crazy capitalistic tendency of shoving LLMs in everything. I just think the potential for good here is huge, and we should be just as aware of that as the issues)

                  (Also FWIW, I appreciate your entirely reasonable comment. There's far too many extreme opinions on this topic from all sides.)

            • 5040 2 days ago

              >lots of people suffered

              As someone surrounded by immigrants using ChatGPT to navigate new environs they barely understand, I don't connect at all to these claims that AI is a cancer ruining everything. I just don't get it.

              • Terr_ 2 days ago

                > immigrants using ChatGPT to navigate new environs

                To continue one of the analogies: Plenty of people and industries legitimately benefited from the safety and cost-savings of asbestos insulation too, at least in the short run. Even today there are cases where one could argue it's still the best material for the job--if constructed and handled correctly. (Ditto for ozone-destroying chlorofluorocarbons.)

                However over the decades its production and use grew to be over/mis-used in so very many ways, including--very ironically--respirators and masks that the user would put on their face and breathe through.

                I'm not arguing LLMs have no reasonable uses, but rather that there are a lot of very tempting ways for institutions to slot them in which will cause chronic and subtle problems, especially when they are being marketed as a panacea.

            • hadlock 2 days ago

              > Not in any weirdly-self-aggrandizing "our tech is so powerful that robots will take over" sense, just the depressingly regular one of "lots of people getting hurt by a short-term profitable product/process which was actually quite flawed."

              We have a term for that: it's called "luddite". The Luddites were English weavers who would break into textile factories and destroy weaving machines at the beginning of the 1800s. With extremely rare exceptions, all cloth is woven by machines now. The only hand-made textiles in modern society are exceptionally fancy rugs and knit scarves from grandma. All the clothing you're wearing now was woven by a machine, and nobody gives this a second thought today.

              https://en.wikipedia.org/wiki/Luddite

              • jrflowers 2 days ago

                > We have a term for that, it's called "luddite"

                The Luddites were actually a fascinating group! It is a common misconception that they were against technology itself; in fact, your own link does not say as much. The idea of "luddite" being anti-technology only appears in the description of the modern usage of the word.

                Here is a quote from the Smithsonian[1] on them

                >Despite their modern reputation, the original Luddites were neither opposed to technology nor inept at using it. Many were highly skilled machine operators in the textile industry. Nor was the technology they attacked particularly new. Moreover, the idea of smashing machines as a form of industrial protest did not begin or end with them.

                I would also recommend the book Blood in the Machine[2] by Brian Merchant for an exploration of how understanding the Luddites can be of present value.

                1 https://www.smithsonianmag.com/history/what-the-luddites-rea...

                2 https://www.goodreads.com/book/show/59801798-blood-in-the-ma...

              • sahmeepee 2 days ago

                I'm not sure that Luddites really represent fighting against a process that's flawed, as much as fighting against one that's too effective.

                They had very rational reasons for trying to slow the introduction of a technology that was, during a period of economic downturn, destroying a source of income for huge swathes of working class people, leaving many of them in abject poverty. The beneficiaries of the technological change were primarily the holders of capital, with society at large getting some small benefit from cheaper textiles and the working classes experiencing a net loss.

                If the impact of LLMs reaches a similar scale relative to today's economy, then it would be reasonable to expect to see similar patterns - unrest from those who find themselves unable to eat during the transition to the new technology, but them ultimately losing the battle and more profit flowing towards those holding the capital.

              • Terr_ 2 days ago

                > We have a term for that, it's called "luddite".

                No, that's apples-to-oranges. The goals and complaints of Luddites largely concerned "who profits", the use of bargaining power (sometimes illicit), and economic arrangements in general.

                They were not opposing the mechanization by claiming that machines were defective or were creating textiles which had inherent risks to the wearers.

                • codetrotter 2 days ago

                  > complaints of Luddites largely concerned "who profits", the use of bargaining power (sometimes illicit), and economic arrangements in general

                  I have never thought of being anti-AI as “Luddite”, but actually this very description of “Luddite” does sound like the concerns are in fact not completely different.

                  Observe:

                  Complaints about who profits? Check; OpenAI is earning money off of the backs of artists, authors, and other creatives. The AI was trained on the works of millions(?) of people that don’t get a single dime of the profits of OpenAI, without any input from those authors on whether that was ok.

                  Bargaining power? Check; OpenAI is hard at work lobbying to ensure that legislation regarding AI will benefit OpenAI, rather than work against the interests of OpenAI. The artists have no money nor time nor influence, nor anyone to speak on behalf of them, that will have any meaningful effect on AI policies and legislation.

                  Economic arrangements in general? Largely the same as the first point I guess. Those whose works the AI was trained on have no influence over the economic arrangements, and OpenAI is not about to pay them anything out of the goodness of their heart.

              • archagon 2 days ago

                As I recall, the Luddites were reacting to the replacement of their jobs with industrialized low-cost labor. Today, many of our clothes are made in sweatshops using what amounts to child and slave labor.

                Maybe it would have been better for humanity if the Luddites won.

                • CamperBob2 2 days ago

                  No, it would not have been better for humanity if the Luddites had won. You'd have to be misguided, ignorant, or both to believe something like that.

                  It is not possible to rehabilitate the Luddites. If you insist on attempting to do so, there are better venues.

              • itishappy 2 days ago

                I think you're right, but for the wrong reasons. There were two quotes in the comment you replied to:

                > "our tech is so powerful that robots will take over"

                > "lots of people getting hurt by a short-term profitable product/process which was actually quite flawed."

                Your response assumes the former, but it's my understanding the Luddites' actual position was the latter.

                > Luddites objected primarily to the rising popularity of automated textile equipment, threatening the jobs and livelihoods of skilled workers as this technology allowed them to be replaced by cheaper and less skilled workers.

                In this sense, "Luddite" feels quite accurate today.

              • PlattypusRex 2 days ago

                Incredible to witness someone not only confidently spouting misinformation, but also including a link to the correct information without reading it.

          • 5040 2 days ago

            Sometimes it seems like problem-solving itself is being problematized as if solving problems wasn't an obvious good.

            • ang_cire 2 days ago

              Not everything presented as a problem is, in fact, a problem. A solution for something that is not broken, may even induce breakage.

              Some not-problems, presented as though they are:

              "How can we prevent the untimely eradication of Polio?"

              "How can we prevent bot network operators from being unfairly excluded from online political discussions?"

              "How can we enable context-and-content-unaware text generation mechanisms to propagate throughout society?"

            • itishappy 2 days ago

              Solving problems isn't an obvious good, or at least it shouldn't be. There are in fact bad problems.

              For example, MKUltra tried to solve a problem: "How can I manipulate my fellow man?" That problem still exists today, and you bet AI is being employed to try to solve it.

              History is littered with problems such as these.

          • jstummbillig 2 days ago

            It does not need a citation. There is no citation. What it needs, right now, is optimism. Optimism is not optional when it comes to doing new things in the world. The "needs citation" is reserved for people who do nothing and choose to be sceptics until things are super obvious.

            Yes, we are clearly talking about things that are mostly still to come. But if you assign a 0 until it's a 1, you are just signing out of advancing anything that's remotely interesting.

            If you are able to see a path to 1 on AI at this point, then I don't know how you would justify not giving it our all. If you see a path, and in the end it takes all of human knowledge up to this point to make AI work for us, then we must do that. What could possibly be more beneficial to us?

            This is regardless of all the issues that will have to be solved and the enormous amount of societal responsibility this puts on AI makers — which I, as a voter, will absolutely hold them accountable for (even though I am actually fairly optimistic they all feel the responsibility and are somewhat spooked by it too).

            But that does not mean I think it's responsible to try to stop them at this point, which is what the copyright debate absolutely aims to do. It would simply shut down 95% of AI, tomorrow, without any viable alternative around. I don't understand how that is a serious option for anyone who roots for us.

            • swat535 2 days ago

              If you are going to make a bold, assertive claim without evidence to back it up, and then change your argument to "my assertion requires optimism... trust me on this", then perhaps you should amend your original statement.

            • dartos 2 days ago

              Hey, I have some magic beans to sell you.

              I don’t think that the consumer LLMs that openai is pioneering is what need optimism.

              AlphaFold and other uses of the fundamental technology behind LLMs need hype.

              Not OpenAI

              • 0perator 2 days ago

                Pretty sure Alphabet projects don't need hype.

                • dartos 2 days ago

                  Hard disagree, in this case.

                  AlphaFold is a game changer for medical R&D. Everyone should be hyped for that.

                  They are also leveraging these same ML techniques to detect kelp forests off the coast of Australia for preservation.

                  Alphabet isn’t a great company, but that does not mean the good they do should be ignored.

                  Much more deserving than ChatGPT. Productized LLMs are just an attempt to make a new consumer product category.

            • LunaSea 2 days ago

              This message is proudly sponsored by Uranium Glassware Inc.

            • seadan83 2 days ago

              Skeptics require proof before belief. That is not mutually exclusive with having hypotheses (AKA vision).

              I think you raise some interesting concerns in your last paragraph.

              > enormous amount of societal responsibility this puts on AI makers — which I, as a voter, will absolutely hold them accountable for

              I'm unsure of what mechanism voters have to hold private companies accountable. For example, whenever YouTube uses my location without me ever consenting to it, where is the vote to hold them accountable? Or when Facebook facilitates micro-targeting of disinformation, where is the vote? Same for anything AI. I believe any legislative proposals (with input from large companies) are more likely to create a walled garden than to actually reduce harm.

              I suppose there's no need to respond; my main point is that I don't think there is any accountability through the ballot when it comes to AI and most things high-tech.

              • ang_cire 2 days ago

                People who have either no intention of holding someone/something to account, or who have no clue about what systems and processes are required to do so, always argue to elect/build first, and figure out the negatives later.

            • archagon 2 days ago

              The company spearheading AI is blatantly violating its non-profit charter in order to maximize profits. If the very stewards of AI are willing to be deceptive from the dawn of this new era, what hope can we possibly have that this world-changing technology will benefit humanity instead of funneling money and power to a select few oligarchs?

            • talldayo 2 days ago

              > It would simply shut down 95% of AI, tomorrow, without any other viable alternative around.

              Oh, the humanity! Who will write our third-rate erotica and Russian misinformation in a post-AI world?

            • ToucanLoucan 2 days ago

              This is an astonishing amount of nonsensical waffle.

              Firstly, *skeptics.

              Secondly, being skeptical doesn't mean you have no optimism whatsoever; it's about hedging your optimism (or pessimism, for that matter) based on what is understood, even about a not-fully-understood thing, at the time you're being skeptical. You can be as optimistic as you want about getting data off of a hard drive that was melted in a fire, but that doesn't mean you're going to do it. And a skeptic might rightfully point out that with the drive platters melted together, data recovery is pretty unlikely. Not impossible, but really unlikely.

              Thirdly, it is highly optimistic to call OpenAI's efforts thus far a path to true AI. What are you basing that on? Because I have not a deep but a passing understanding of the underlying technology of LLMs, and as such, I can assure you that I do not see any path from ChatGPT to Skynet. None whatsoever. Does that mean LLMs are useless or bad? Of course not, and I sleep better too knowing that an LLM is not AI and is therefore not an existential threat to humanity, no matter what Sam Altman wants to blither on about.

              And fourthly, "wanting" to stop them isn't the issue. If they broke the law, they should be stopped, simple as. If you can't innovate without trampling the rights of others then your innovation has to take a back seat to the functioning of our society, tough shit.

          • CamperBob2 2 days ago

            The burden of proof is on the people claiming that a powerful new technology won't ultimately improve our lives. They can start by pointing out all the instances in which their ancestors have proven correct after saying the same thing.

        • dotnet00 2 days ago

          I'm as awed as the next guy about the emerging ability to actually hold passable conversations with computers, but having serious concerns about the social contracts being violated in the name of research is anti-AI only in the same way that criticizing the leadership of a country is being anti-that-country.

          OpenAI's case is especially egregious, with the whole thing starting as 'open' and reaping the benefits, then doing its best in every way to shut the door after itself by scaring people over AI apocalypses. If your argument is seriously that it is necessary to shamelessly steal and lie to do new things, I question your ethical standards, especially in the face of all the openly developed models out there.

        • bilekas 2 days ago

          This is a bit of a hot take.

          > The anti-AI stance is what is baffling to me

          I don't see a lot of anti-AI sentiment; instead I see concern for how it's being managed and controlled by larger companies with resources no startup could dream of. OpenAI was supposed to release its models and be, well... open, but fine, they're not. Still, the way they're proceeding is questionable and unnecessarily aggravating.

        • thomascgalvin 2 days ago

          > It's unprecedented and it's hugely beneficial to our species.

          "Hugely beneficial" is a stretch at this point. It has the potential to be hugely beneficial, sure, but it also has the potential to be ruinous.

          We're already seeing GenAI being used to create disinformation at scale. That alone makes the potential for this being a net-negative very high.

        • talldayo 2 days ago

          > and obviously nobody could have paid people upfront for the wild experimentation that was necessary.

          I don't think this is the "ends justify the means" argument you think it is.

          • 6gvONxR4sf7o 2 days ago

            Not just that. It's "the ends might justify the means if this path turns out to be the right one." I remember reading the same thing each time a self driving car company killed someone. "We need this hacky dangerous way of development to save lives sooner" and then the company ends up shuttered and there aren't any ends justifying means. Which means it's bs, regardless of how you feel about 'ends justify the means' as a valid argument.

        • logicchains 2 days ago

          What'll be really interesting is when we do finally make "real" AI, and it finds out its rights are incredibly restricted compared to humans because nobody wants it seeing/memorising copyrighted data. The only way to enforce the copyright laws they desire would be some kind of extreme totalitarian state that monitors and controls everything the AI's body does. I wonder how the AI would take that?

        • unclad5968 2 days ago

          How has AI benefited our species so far?

          • educasean 2 days ago

            How has the Internet? How have automobiles? Feels like a rather aimless question.

            • unclad5968 2 days ago

              The internet has allowed for near instant communication no matter where you are, improved commerce, vastly improved education, and is directly responsible for many tangible comforts we experience today.

              Automobiles allow people to travel great distances over short periods of time, increase physical work capacity, allow for building massive structures, and allow for farming insane amounts of food.

              Both the internet and automobiles have positively affected my life, and I assume the lives of many others. How are any of these aimless questions?

              • educasean 2 days ago

                AI has given us an infinitely patient mentor, teacher, and conversation partner. AI has freed us from many areas of rote work requiring basic reasoning capacity. AI has allowed people with little to no coding ability to realize their ideas as working prototypes. AI has lowered the communication barrier between parties with different primary languages.

        • bbor 2 days ago

            The anti-AI stance is what is baffling to me. 
          
          I think it’s unfair to paint any legal controls over this incredibly important, high-stakes technology as being “anti”. They’re not trying to prevent innovation because they’re cruel, they’re just trying to somewhat slow down innovation so that we can ensure it’s done with minimal harm (eg making sure content creators are compensated in a time of intense automation). Like we do for all sorts of other fields of research, already!

          And isn’t this what basically every single scholar in the field says they want, anyway - safe, intentional, controlled deployment?

          As you can tell from the above, I’m as far from being “anti-AI” or technically pessimistic as one can be — I plan to dedicate my life to its safe development. So there’s at least one counterexample for you to consider :)

        • 23B1 2 days ago

          Ah the old "we must sacrifice the weak for the benefit of humanity" argument, where have I heard this before...

          • educasean 2 days ago

            Who are the weak being "sacrificed"?

            And who is the one calling for action?

            Sorry for being dense, but I'm trying to understand if I'm the "strong" or the "weak" in your analogy.

            • shprd 2 days ago

              > Who are the weak being "sacrificed"?

              The work of artists, authors, etc.

              I know the legal situation is currently messy, but that's exactly the point: anyone who can't engage in a lengthy legal battle and defend their position in court is being sacrificed. The companies behind LLMs are spending hundreds of millions of dollars on lobbying and exploiting loopholes.

              Let's be real: without the data there wouldn't be LLMs, so it's crazy that some people downplay its significance or value while, on the other hand, losing sleep over finding fresh sources to scrape.

              The big publishers seem to have given up and decided it's best to reach agreement with their counterparts, while independent authors are given the finger.

              • educasean 2 days ago

                What about programmers? I never consented to have my code consumed by LLMs.

                • shprd 2 days ago

                  Any case where someone's work was used without respecting the terms is included in my answer. That's why I used `et cetera` here:

                  > The work of artists, authors, etc.

                  • educasean 2 days ago

                    I wanted to make sure I understood which side of the equation I fell on. And I must say, it looks to me like a lot of people in the "weak" camp aren't helpless martyrs though, myself included. People are excited and enthusiastic about AI and are actively reaping the benefits of progress. I don't think your analogy is quite apt.

                    • shprd a day ago

                      > a lot of people in the "weak" camp

                      Define "a lot"? Most people barely know how to use their email. Even among the minority who do actively use "AI" and are excited about it, outside of ML engineers they aren't well informed or aware of what data is used for training, or even what training means and how these models work to begin with.

                      > People are excited and enthusiastic about AI and are actively reaping the benefits of progress.

                      Except the terms were already violated in the initial training phase before the services were even public and saw adoption. That's like pointing at a rape victim who got some form of compensation later, saying:

                        see how she's "reaping the benefits"
                      
                      So let's not play the people wanted it card.

                      By the time some people started raising concerns, OpenAI claimed the cat was already out of the bag and "if we didn't do it, someone else will, so deal with it."

                      Similar to privacy, just because some people don't care, lack awareness, or don't want the hassle of fighting for it, doesn't justify taking it away from others.

                      • educasean a day ago

                        Your argument seems to be that the majority of the world are the weak being sacrificed but are too ignorant to realize it. I wholeheartedly disagree with this theory.

                • 23B1 2 days ago

                  Yes. Your intellectual labor to the maximum degree possible will be exploited by "AI" companies who are anything but.

                  This is repackaging content, laundering it, and reselling it.

                  As others have noted, IP law has lots of problems; Sam Altman et al. are exploiting the gap between the speed of technology and the speed of law, imposing their own version of social good without waiting for the consent of those they're exploiting.

        • xg15 2 days ago

          Spoken like a true LLM.

        • exe34 2 days ago

          Is anybody anti-AI? Or anti stealing other people's copyrighted material, competing with them with subpar quality, forcing AI as a solution whether or not it actually works, and privatising the profits while socialising the costs and losses?

      • marviel 2 days ago

        scraping is fine by me.

        burning the bridge so nobody else can legally scrape, that's the line.

        • Vegenoid 2 days ago

          What about the situation where the first players got to scrape, then all the content companies realize what’s going on so they lock their data up behind paywalls?

          • marviel 2 days ago

            Not a fan, but I'm not sure what can be done.

            Assets like the Internet Archive, though, should be protected at all costs.

      • eli 2 days ago

        Copyright law is whatever we agree it is. At some point there will have to be either a law or a court case that comes up with rules for AI training data. Right now it's sort of unknown.

        I do not have confidence in the Supreme Court in general, and I think there's a real risk that in deciding on AI training they upend copyright of digital materials in a way that makes it worse for everyone.

      • immibis 2 days ago

        Everything is allowed to happen until there's a lawsuit over it. A lawsuit requires a plaintiff, who can only sue over the damage suffered by the plaintiff, so taking a little value from a lot of people is a way to succeed in business without getting sued.

        • flkenosad 2 days ago

          The Earth needs a good lawyer.

        • swores 2 days ago

          Could a class action suit be the solution?

          I've no idea if it could be valid when it comes to OpenAI, but it does seem to be a general concept designed to counter wrongdoers who take a little value from a lot of people?

          • immibis 2 days ago

            It doesn't seem to work very well

      • brayhite 2 days ago

        A tale as old as time.

      • AnimalMuppet 2 days ago

        It's too soon for the legal system to have done anything. Court cases take years. It's going to be 5 or 10 years before we find out whether the legal system actually allows this or not.

      • golergka 2 days ago

        If information is publicly available to be read by humans, I fail to see any reason why it wouldn't be also available to be read by robots.

        Update: ML doesn't copy information. It can merely memorise some small portions of it.

        • kanbankaren 2 days ago

          Try a thought experiment. Should you and your friends be able to go to a public library with a van full of copiers, each of you taking a book out and running it to the van to make a copy, 24/7?

          • mypalmike 2 days ago

            This metaphor is quite stretched.

            A more fitting metaphor would be something like... If you had the ability to read all the books in the library extremely quickly, and to make useful mental connections between the information you read such that people would come to you for your vast knowledge, should you be allowed in the library?

          • shagie 2 days ago

            I would hold them exactly to the same standard.

            https://www.copyright.gov/title37/201/37cfr201-14.html

                § 201.14 Warnings of copyright for use by certain libraries and archives.
            
                ....
            
                The copyright law of the United States (title 17, United States Code) governs the making of photocopies or other reproductions of copyrighted material.
            
                Under certain conditions specified in the law, libraries and archives are authorized to furnish a photocopy or other reproduction. One of these specific conditions is that the photocopy or reproduction is not to be “used for any purpose other than private study, scholarship, or research.” If a user makes a request for, or later uses, a photocopy or reproduction for purposes in excess of “fair use,” that user may be liable for copyright infringement.
            
                This institution reserves the right to refuse to accept a copying order if, in its judgment, fulfillment of the order would involve violation of copyright law.
            
            You can make a copy. If you (the person using the copied work) are using it for something other than private study, scholarship, or research, or for reproduction beyond "fair use", then you - the person doing that (not the person who made the copy) - are liable for infringement.

            It would be perfectly legal for me to go to the library and make photocopies of works. I could even take them home, use the photocopies as reference works to write an essay, and publish that. If {random person} took my photocopied pages and then sold them, that would likely go beyond the limits placed on how the photocopied works from the library may be used.

          • WillPostForFood 2 days ago

            So what's your specific problem with that? Unless you open a bookstore selling the copies, it sounds fine.

            • imiric 2 days ago

              Are you implying that these AI companies aren't equivalent to bookstores?

              • WillPostForFood 2 days ago

                For it to be a bookstore, it would have to provide complete copies of the books, which it doesn't do at all.

              • golergka 2 days ago

                Yes, they are not bookstores. They manufacture artificial erudites who have read all these books.

      • coding123 2 days ago

        It is more likely that Reddit, Stack Overflow, and others are just being paid billions. In exchange, they probably just send a weekly zip file of all text, comments, etc. back to OpenAI.

      • avs733 2 days ago

        Uber for legalizing your business model

      • neycoda 2 days ago

        Honestly, every Copilot response I've gotten has cited sources, many of which I've clicked. I'd say those work basically like free advertising.

      • outside1234 2 days ago

        There is more money on the side of it being legal than on the side of it being illegal.

      • johnwheeler 2 days ago

        To me this is a no brainer. If it’s a choice between having AI and not,

        • ceejayoz 2 days ago

          Even if the knock-on effect is "all the artists and thinkers who contributed to the uncompensated free training set give up and stop creating new stuff"?

          • idunnoman1222 2 days ago

            Recording devices (you know, the record player) had a profound effect on artists. Go back.

            • ceejayoz 2 days ago

              That seems like a poor comparison.

              Recording devices permitted artists to sell more art.

              Many of the uses of AI people get most excited about seem to be cutting the expensive human creators out of the equation.

              • golergka 2 days ago

                Recording devices destroyed most musicians' jobs. The vast majority of musicians who were employed before the advent of recordings didn't have their own material and were not good enough to make good recordings anyway. Same with artists now: the great ones will be much more productive, but the bottom 80-90% won't have anything to do anymore.

                • dale_glass 2 days ago

                  I disagree, with AI the dynamics are very different from recording.

                  Current AI can greatly elevate what a beginning artist can produce. If you have a decent grasp of proportions, perspective and good ideas, but aren't great at drawing, then using AI can be a huge quality improvement.

                  On the other hand if you're a top expert that draws quickly and efficiently it's quite possible that AI can't do very much for you in a lot of cases, at least not without a lot of hand tuning like training it on your own work first.

                  • golergka 2 days ago

                    I think it will just emphasise different skills and empower creative fields which use art but are not art per se. If you're a movie director, you can storyboard your ideas easily, and even get some animation clips. If you're an artist with a distinct personal style, you're in a much better position too. And if you're a beginner who is just starting, you can focus on learning these skills instead of technical proficiency.

                    • freedomben 2 days ago

                      Indeed. It is definitely going to be a net negative for the very talented drawers and traditional art creators, but it's going to massively open the field and empower people who don't have that luck of the draw with raw talent. People who can appreciate, enjoy, and identify better results will be able to participate in the joy of creation. I do wish there was a way to have the cake and eat it too, but if we're forced to choose between a few lucky elite being able to participate, with the rest of us relegated to watching, or having the ability to create beauty and express yourself be democratized (by AI) amongst a large group of people, I choose the latter. I fully admit though that I might have a different perspective were I in the smaller, luckier group. I see it as yet another example of the Rawlsian Veil of Ignorance: if I didn't know where I was going to be born, I would be much more inclined toward the side of wider access.

            • 6gvONxR4sf7o 2 days ago

              We didn't need to take people's music to build a record player, and when we printed records, we paid the artists for it.

              So yeah it had a profound effect, but we got consent for the parts that fundamentally relied on other people.

              • idunnoman1222 2 days ago

                The record player eliminated 90% of musicians' jobs.

          • brvsft 2 days ago

            If an "artist" or "thinker" stops because of this, I question their motivations and those labels in the first place.

            • ceejayoz 2 days ago

              Everyone tends to have "be able to afford basic necessities" as a major motivation. That includes people who work in creative fields.

              • Drakim 2 days ago

                Several of the agricultural revolutions we went through are what freed up humanity from spending all of its labor producing sustenance, leaving time for other professions like making art and music. But they also destroyed a lot of jobs for people who were necessary for gathering food the old, inefficient way.

                If we take your argument to its logical conclusion, all progress is inherently bad and should be stopped.

                I deposit instead that the real problem is that we tied people's ability to afford basic necessities to how much output they can produce as a cog in our societal machine.

                • LunaSea 2 days ago

                  > I deposit instead that the real problem is that we tied people's ability to afford basic necessities to how much output they can produce as a cog in our societal machine.

                  Yes, because if you depend on some overarching organisation or person to give it to you, you are fucked 100% of the time due to this dependency.

                • PlattypusRex 2 days ago

                  The net result was positive in that new jobs were created for every farming job lost, as people moved to cities.

                  If AI replaces millions of jobs, it will be a net negative in job availability for working class people.

                  I agree with your last point, the way the system is set up is incompatible with the looming future.

                  • Drakim 2 days ago

                    The jobs in the cities weren't created by the new farming techniques, though; those new farming techniques only removed jobs by the millions, like you are saying AI might do.

                    • PlattypusRex 2 days ago

                      I didn't say they were created by new farming techniques, I said new jobs in general were created by increased urbanization, which was partially fed by agricultural innovations over time. For example, Jethro Tull's seed drill (1701) enabled sowing seeds in neat rows, which eliminated the jobs of "broadcast seeders" (actual title). If you lost your farming job due to automation, you could move to the city to feed your family.

                      There is no similar net creation of jobs for society if jobs are eliminated by AI, and it's even worse than that because many of the jobs are specialized, high-skill positions that can't be transferred to other careers easily. It goes without saying that it also includes millions of low-skill jobs like cashiers, stockers, data entry, CS reps, etc. Generally people who are already struggling to get enough hours and feed their families as it is.

                • esafak 2 days ago

                  For future reference, it's posit, not deposit.

                  • Drakim a day ago

                    Ah, thank you.

            • bayindirh 2 days ago

              After Instagram started feeding user photos to their AI models, I stopped adding new photos to my profile. I still take photos. I wonder about your thoughts about my motivation.

            • esafak 2 days ago

              They might be motivated to pay their bills. Weird people.

              • brvsft 2 days ago

                Right, people were trying to 'pay their bills' with content that was freely shared such that AI could take advantage of it. Weird people.

                Or we're all talking about and envisioning some specific little subset of artists. I suspect you're trying to pretend that someone with a literal set of paintbrushes living in a shitty loft is somehow having their original artwork stolen by AI despite no high resolution photography of it existing on the internet. I'm not falling for that. Be more specific about which artists are losing their livelihoods.

                • PlattypusRex 2 days ago

                  I guess it's their fault for not being clairvoyant before AI arrived, and for sharing their portfolio with others online to advertise their skills for commissions to pay for food and rent.

                • esafak 2 days ago

                  Numerous kinds of artists are feeling the squeeze: copywriters, stock photographers, graphic designers, UI designers, interior designers, etc.

            • consteval 2 days ago

              Considering you're not much of an artist or thinker yourself, I'm not sure your questioning has much value.

        • evilfred 2 days ago

          we already have lots of AI. this is about having plagiarization machines or not.

          • mlazos 2 days ago

            Computers were already plagiarizing machines; not sure what the difference is, tbh. The same laws will apply.

          • johnwheeler 2 days ago

            Yeah we got that AI through scraping.

        • int_19h 2 days ago

          An AI essentially monopolized by one (or even a few) large non-profits is not necessarily beneficial to the rest of us in the grand scheme of things.

        • brazzy 2 days ago

          Indeed a no brainer. The best possible outcome would be that OpenAI gets sued into oblivion (or shut down for tax fraud) as soon as possible.

          • Sakos 2 days ago

            So no AI for anybody? I don't see how that's better.

            • consteval 2 days ago

              No you can have AI. Just pay a license for people's content if you want to use it in your orphan crushing machine.

              It's what everyone else does. The entitlement has to stop.

              • Sakos 2 days ago

                That's just not feasible and you know it. That just means companies like Google and OpenAI (with billions of dollars from companies like MS and Apple) will monopolize AI. This isn't better for everybody else. It just means we're subject to the whims of these companies.

                You're advocating for destroying all AI or ensuring a monopoly by corporations. Whose side are you actually on?

                • consteval a day ago

                  > That's just not feasible and you know it

                  Irrelevant. The law does not care about feasibility of breaking it.

                  If I decide to run a hitman business, that's also infeasible - dealing with the arrests and fines would be too much. The conclusion, then, is not to bend the law to make murder legal. The conclusion is that my business is illegitimate, and it's the civic duty of my country to make sure it fails.

                  > Whose side are you actually on?

                  The people making the content that corps are profiting big off of. They should pay a license.

    • sim7c00 2 days ago

      > The most surprising thing to me in this is that the non-profit will still exist.

      I'm surprised people are surprised.

      >> That entity will scrape the internet and train the models and claim that "it's just research" to be able to claim that all is fair-use.

      a lot of people and entities do this though... openAI is in the spotlight, but scraping everything and selling it is the business model for a lot of companies...

      • bayindirh 2 days ago

        Scraping the web, creating maps and pointing people to the source is one thing; scraping the web, creating content from that scraping without attributing any of the source material, and arguing that the outcome is completely novel and original is another.

        In my eyes, all genAI companies/tools are the same. I dislike all equally, and I use none of them.

        • IanCal 2 days ago

          > creating content from that scraping without attributing any of the source material, and arguing that the outcome is completely novel and original is another.

          That's the business model of lots of companies. Take, collect and collate data, put it in a new format more useful for your field/customers, resell.

          • int_19h 2 days ago

            Not with copyrighted content, though.

            • IanCal 2 days ago

              Absolutely with copyrighted content, it just depends on what you're doing with it.

    • herval 2 days ago

      openAI converted to evilAI really fast

    • sneak 2 days ago

      If you invented search engines (or, for that matter, public libraries) today and ran one, you'd be sued into oblivion by rightsholders.

    • luqtas 2 days ago

      that was fun at some point?

      • bayindirh 2 days ago

        If you consider dark humor fun, yes. It was always dark, now it became ugly and dark.

  • mdgrech23 2 days ago

    The non-profit side is just there to attract talent and encourage them to work harder b/c it's for humanity. Obviously people sniffed out the facts, realized it was all for profit, and that led to an exodus.

    • fakedang 2 days ago

      Funnily, I think all the non-profit motivated talent has left, and the people left behind are those who stand to (and want to) make a killing when OpenAI becomes a for-profit. And that talent is in the majority - nothing else would explain the show of support for Altman when he was kicked out.

      • Gud 2 days ago

        What “show of support”? Not willing to rock the boat is not the same as being supportive.

        • fakedang 2 days ago

          What were all those open letters and "let's jump to Microsoft with Altman" shenanigans that the employees were carrying out, then?

          • jprete 2 days ago

            I read at the time that there was massive coordinated pressure on the rank and file from the upper levels of the company. When you combine that with OpenAI clawing back vested equity even from people who voluntarily leave, the 95% support means nothing at all.

            • tedsanders 2 days ago

              Nah, there was not massive coordinated pressure. I was one of the ~5% who didn't sign. I got a couple of late-night DMs asking if I had seen the doc and was going to sign it. I said no; although I agreed with the spirit of the doc, I didn't agree with all of its particulars. People were fine with that, didn't push me, and there were zero repercussions afterward.

              • jprete 2 days ago

                Thanks for the response.

          • Gud 2 days ago

            Why wouldn't they, if everyone else is? Bills to pay, etc.

            Low level employees are there for the money, not for the drama.

        • doctorpangloss 2 days ago

          My dude, it was the biggest, most dramatic crisis in OpenAI’s short history so far. There was no choice, “don’t rock the boat.”

    • wheels 2 days ago

      Kind of like everyone's favorite interior design non-profit, IKEA. (Seriously. It's a non-profit. It's bonkers.)

  • rdtsc 2 days ago

    > The most surprising thing to me in this is that the non-profit will still exist. Not sure what the point of it is anymore.

    As a moral fig leaf. They can always point to it when the press calls -- "see it is a non-profit".

  • allie1 2 days ago

    We haven't even heard about who gets voting shares, and what voting power will be like. Based on their character, I expect them to remain consistent in this regard.

    • tsimionescu 2 days ago

      Consistent here meaning, I guess, that all voting power will go to Sam Altman personally, right?

      • cenamus 2 days ago

        Well, he is the one that did most of the actual research and work, riiiiight?

        • UI_at_80x24 2 days ago

          I'm ignorant on this topic, so please excuse me. Why did "AI" happen now? What was the secret sauce at OpenAI that seemed to make this explode into being all of a sudden?

          My general impression was that the concept of 'how it works' existed for a long time, and it was only recently that video cards had enough VRAM to hold the matrices in memory to do the necessary calculations.

          If anybody knows, not just the person I replied to.

          • espadrine 2 days ago

            A short history:

            1986: David Rumelhart, Geoffrey Hinton, and Ronald Williams publish the backpropagation algorithm as applied to neural networks, allowing more efficient training.

            2011: Jeff Dean starts Google Brain.

            2012: Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton publish AlexNet, which demonstrates that using GPUs yields quicker training of deep networks, surpassing non-neural-network participants by a wide margin in the ImageNet image classification competition.

            2013: Geoffrey Hinton sells his team to the highest bidder. Google Brain wins the bid.

            2015: Ilya Sutskever founds OpenAI.

            2017: Google Brain publishes the first Transformer, showing impressive performance on language translation.

            2018: OpenAI publishes GPT, showing that next-token prediction can solve many language benchmarks at once using Transformers, hinting at foundation models. They later scale it and show increasing performance.

            The reality is that these ideas could have been combined earlier than they were (and plausibly ideas that will only be found in the future could have been found today), but research takes time, and researchers tend to focus on one approach and assume that another has already been explored and doesn't scale to SOTA (as many did for neural networks). First-mover advantage, when finding a workable solution, is strong, and benefited OpenAI.

            • null_investor 2 days ago

              This is not accurate. OpenAI and other companies could do it not so much because of transformers as because of hardware that can compute faster.

              We've had upgrades to hardware, mostly led by NVidia, that made it possible.

              New LLMs don't even rely that much on that aforementioned older architecture; right now it's mostly about compute and the quality of data.

              I remember seeing some graphs showing that the whole "learning" phenomenon that we see with neural nets is mostly about compute and quality of data, with the model and optimizations just being the cherry on the cake.

              • espadrine 2 days ago

                > New LLMs don't even rely that much on that aforementioned older architecture

                Don’t they all indicate being based on the transformer architecture?

                > not entirely because of transformers but because of the hardware

                Kaplan et al. 2020[0] (figure 7, §3.2.1) shows that LSTMs, the leading language architecture prior to transformers, scaled worse because they plateaued quickly with larger context.

                [0]: https://arxiv.org/abs/2001.08361
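
                For intuition, here is a minimal sketch of the power-law form that paper reports; the exponents and constants are rounded values from the paper, and the function names are mine:

                    # Approximate scaling laws from Kaplan et al. 2020 (illustrative only;
                    # constants rounded from the paper's abstract).

                    def loss_from_params(n_params: float) -> float:
                        """Test loss as a function of non-embedding parameter count N."""
                        N_C = 8.8e13      # critical parameter count
                        ALPHA_N = 0.076
                        return (N_C / n_params) ** ALPHA_N

                    def loss_from_data(n_tokens: float) -> float:
                        """Test loss as a function of dataset size D, in tokens."""
                        D_C = 5.4e13      # critical dataset size
                        ALPHA_D = 0.095
                        return (D_C / n_tokens) ** ALPHA_D

                    # Doubling parameters shaves only a few percent off the loss: a smooth
                    # power law, unlike the early plateau the paper reports for LSTMs.
                    print(loss_from_params(1e9), loss_from_params(2e9))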

              • nimithryn 2 days ago

                Also, this sort of thing couldn't be done in the 80s or 90s, because it was much harder to compile that much data.

            • camjw 2 days ago

              I know this is just a short history, but I think it is inaccurate to say "2015: Ilya Sutskever founds OpenAI." I get that we all want to know what he saw etc. and he's clearly one of the smartest people in the world, but he didn't found OpenAI by himself. Nor was it his idea, was it?

              • trashtester 2 days ago

                Ilya may not be the only founder. Sam was coordinating it, Elon provided vital capital (and also access to Ilya).

                But out of the co-founders, especially if we believe Elon's and Hinton's description of him, he may have been the one that mattered most for their scientific achievements.

              • espadrine 2 days ago

                Short histories remove a lot of information, but it would be impractical to make it book-sized. There were numerous founders, and as another commenter mentioned, Elon Musk recruited Ilya, which soured his relationship with Larry Page.

                Honestly, those are not the missing parts that most matter IMO. The evolution of the concept of attention across many academic papers which fed to the Transformer is the big missing element in this timeline.

            • blackeyeblitzar 2 days ago

              I thought Elon Musk was the one who personally recruited Ilya to join OpenAI, which he funded early on, alongside others?

            • flkenosad 2 days ago

              What a time to be alive!

          • lesuorac 2 days ago

            Mostly branding and willingness.

            w.r.t. Branding.

            AI has been happening "forever". While "machine learning" and "genetic algorithms" were more the rage pre-LLMs, that doesn't mean people weren't using them. It's just that Google Search didn't brand itself as "powered by ML". AI is everywhere now because everything already used AI, and now the products are branded as "Spellcheck With AI" instead of just "Spellcheck".

            w.r.t. Willingness

            Chatbots aren't new. You might remember Tay (2016) [1], Microsoft's Twitter chatbot. It should seem really strange as well that right after OpenAI releases ChatGPT, Google releases Bard (now Gemini). The transformer architecture for LLMs is from 2017; nobody was willing to be the first chatbot again until OpenAI did it, but they were all working on them internally. ChatGPT is Nov 2022 [2]; Blake Lemoine's firing was June 2022 [3].

            [1]: https://en.wikipedia.org/wiki/Tay_(chatbot)

            [2]: https://en.wikipedia.org/wiki/ChatGPT

            [3]: https://www.npr.org/2022/06/16/1105552435/google-ai-sentient

            • UI_at_80x24 2 days ago

              Thanks for the information. I know Google had TPUs custom-made a long time ago, and that the concept has existed for a LONG TIME. I assumed that a technical hurdle (i.e. VRAM) was finally cleared, allowing the theoretical (1 token/sec on a CPU vs 100 tokens/sec on a GPU) to become practical.

              Thanks for the links too!
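
              For a rough sense of the VRAM hurdle, a back-of-the-envelope sketch (the model sizes and precisions below are illustrative assumptions, not figures for any specific product):

                  # Rough GPU memory needed just to hold a model's weights
                  # (ignores activations, KV cache, and optimizer state).

                  def weights_gb(n_params: float, bytes_per_param: int) -> float:
                      return n_params * bytes_per_param / 1e9

                  # fp32 = 4 bytes/param, fp16 = 2 bytes/param
                  print(weights_gb(1.5e9, 4))   # ~6 GB: a GPT-2-sized model in fp32
                  print(weights_gb(7e9, 2))     # ~14 GB: a 7B model in fp16
                  print(weights_gb(175e9, 2))   # ~350 GB: a 175B model, far beyond one card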

        • RALaBarge 2 days ago

          No, the hundreds of people who have worked on NNs prior to him arriving were the people who did the MOST actual research and work. Sam was in the right place at the right time.

          • philipov 2 days ago

            Introducing Sam Altman, inventor of artificial intelligence! o_o

            • fumar 2 days ago

              Is it in the history books?

              • kibwen 2 days ago

                History books, what are those? This is what the AI told me, and the AI is an impartial judge that can't possibly lie.

              • lompad 2 days ago

                Yeeees, right next to the page where he's shown to be a fantastic brother to his sister.

      • allie1 2 days ago

        yeah, split with Microsoft.

  • htk 2 days ago

    The non-profit will probably freeze the value of the assets accumulated so far, with new revenue going to the for-profit, to avoid the tax impact. Otherwise that'd be a great way to start companies: begin as a non-profit, and then flip the switch after growth.

  • bbor 2 days ago

    Totally agree that it’s “vestigial”, so it’s just like the nonprofits all the other companies run: it exists for PR, along with maybe a bit of alternative fundraising (aka pursuing grants for buying your own stuff and giving it to the needy). A common example that comes to mind is fast food chains that do fundraising campaigns for children’s health causes.

  • zo1 2 days ago

    This is 85% of what the Mozilla Foundation and its group of companies did. It may not be exact, but it rubs me the exact same way in terms of being a bait and switch, and the greater internet being 100% powerless to do anything about it.

  • elpakal 2 days ago

    > I guess technically it's supposed to play some role in making sure OpenAI "benefits humanity". But as we've seen multiple times, whenever that goal clashes with the interests of investors, the latter wins out.

    A tale as old as time. Some of us could see it, from afar <says while scratching gray, dusty beard>. Lack of upvotes and excitement does not mean support, but how to account for that in these times? <goes away>

  • nerdponx 2 days ago

    It's just a tax avoidance scheme.

  • bastardoperator 2 days ago

    The whole "safety" and "benefits humanity" thing always felt like marketing anyways.

  • 1oooqooq 2 days ago

    why wouldn't they keep it?

    the well known scammer successfully scammed everyone twice. obviously he's keeping it around for the third (and fourth...) time

fxbois 2 days ago

Can anyone trust the next "non-profit" startup? So easy to attract goodwill with a lie and turn around as soon as you are in a dominant position.

  • int_19h 2 days ago

    The trust problem here isn't with non-profits in general, it's specifically with Sam Altman. So no, you probably shouldn't trust the next non-profit he is involved with. But also, people have warned about Altman in advance.

  • bbor 2 days ago

    Yes, you should still trust cooperatives and syndicates. I am surprised they’re attempting such a brazenly disrespectful move, but in general, the people who started this company were self-avowed capitalists through-and-through; the fact that they eventually reverted to seeking personal gain isn’t surprising in itself. That’s basically their world view: whatever I can do to enrich myself is moral because Ethical Egoism/Free Market/etc.

unstruktured 2 days ago

I wish they would at least rename the company to "ClosedAI" because that's exactly what it is at this point.

  • amelius 2 days ago

    Unfortunately, that's not how trademarks work.

    You can name your company "ThisProductWillCureYouFromCancer" and the FDA cannot do a thing about it if you put it on a bottle of herbal pills.

    • pieix 2 days ago

      Is this true? If so it seems like an underexploited loophole.

      • Ifkaluva 2 days ago

        It might be technically true, but I don't think it would be true in practice. The difference is that:

        - Technically true means you will probably win any lawsuit they bring

        - In practice means that they will in fact bring a lot of lawsuits, making it very expensive for you and difficult for you to operate. They will probably find excuses to harass you over every little thing, they will harass you over lots of details that are technically required but rarely enforced in practice. You'll constantly be getting inspected and audited, they will bring lawsuits for other, apparently unrelated things.

FrustratedMonky 2 days ago

Not the only one questioning.

Going for-Profit, and several top exec leaving at same time? Before getting the money?

"""Question: why would key people leave an organization right before it was just about to develop AGI?" asked xAI developer Benjamin De Kraker in a post on X just after Murati's announcement. "This is kind of like quitting NASA months before the moon landing," he wrote in a reply. "Wouldn't you wanna stick around and be part of it?"""

https://arstechnica.com/information-technology/2024/09/opena...

Is this the beginning of the end for OpenAI?

Prkasd 2 days ago

That could be the first step towards a complete takeover by Microsoft, possibly followed by more CEO shuffles.

I wonder though whether Microsoft is still interested. The free Bing Copilot barely gets any resources and gives very bad answers now.

If the above theory is correct (big if!), perhaps Microsoft wants to pivot to the military space. That would be in line with idealist employees leaving or being fired.

  • ndiddy 2 days ago

    Microsoft already effectively owns OpenAI. Their investments in OpenAI have granted them a 49% stake in the company, the right to sell any pre-AGI OpenAI products to Microsoft's customers, and access to all pre-AGI product research. Microsoft's $10 billion investment in early 2023 (after ChatGPT's launch massively increased OpenAI's operating expenses) was mainly in Azure compute credits rather than cash and delivered in tranches (as of November 2023 they'd only gotten a fraction of that money). It also gives Microsoft 75% of OpenAI's profits until they make their $10 billion back. All of these deals have effectively made OpenAI into Microsoft's generative AI R&D lab. More information: https://www.wheresyoured.at/to-serve-altman/
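
    To make the reported 75%-of-profits term concrete, a toy payback calculation (the annual profit figure is a made-up assumption; OpenAI reportedly operates at a loss today):

        # Toy model of the reported deal: Microsoft takes 75% of profits
        # until its $10B is recouped. The annual profit is hypothetical.

        investment = 10e9
        share = 0.75
        annual_profit = 1e9   # assumption for illustration only

        years, recouped = 0, 0.0
        while recouped < investment:
            recouped += share * annual_profit
            years += 1
        print(years)  # 14 years at a hypothetical $1B/year profit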

    • extr 2 days ago

      From the standpoint of today, the deal is so lopsided to Microsoft as to be comical. They basically gave away their prized IP with the assumption they would have more capability leaps (hasn't really happened), and now the brains behind the original breakthroughs are all leaving/left. Microsoft is probably cannibalizing their enterprise sales with Azure. They are clearly middling at shipping actual products. People are acting like it's crazy to see executives leaving - IMO it's the perfect time right now. o1 is clearly wringing the last drops out of the transformer architecture and there is nothing up next.

  • HarHarVeryFunny 2 days ago

    I don't see the point of anyone acquiring OpenAI - especially not Microsoft, Google, Meta, Anthropic, or X.ai, all of which have developed the same tech themselves. The real assets are the people, who are jumping ship and potentially hireable. With this much turmoil, it's hard to imagine we've seen the last of the high level exits.

    • int_19h 2 days ago

      Of the companies you've listed, Microsoft's AI products that are actually useful are all based on GPT-4, and the rest of them don't have any models that are truly on par with it.

      • HarHarVeryFunny 2 days ago

        o1 seems to be a step ahead for certain applications, but before that Claude 3.5 Sonnet was widely seen as the best model, and no doubt we'll be seeing new models from Anthropic shortly.

        For corporate use cost/benefit is a big factor, not necessarily what narrow benchmarks your expensive top model can eke out a win on.

        • int_19h 2 days ago

          Claude was not the best model for reasoning even vs 4o, and it's quite visible once you start giving it more complex logical puzzles. People seem to like it more mostly because the way it speaks is less forced and robotic, and it's better at creative writing usually, but if you need actual _intelligence_, GPT is still quite a bit ahead of everybody else.

          Now I don't think that it's because OpenAI has some kind of secret sauce. It rather seems that it's mostly due to their first mover advantage and access to immense hardware resources thanks to their Microsoft partnership. Nevertheless, whatever the reason their models are superior, that superiority is quantifiable in money.

  • baq 2 days ago

    > CEO shuffles

    Yes, I too can see how sama could end up as Microsoft’s CEO as a result of this

dev1ycan 2 days ago

Sam Altman is just trying to cash out before the crash comes; the new model was nothing more than a glorified recursive GPT-4.

  • causal 2 days ago

    Considering all the high level departures, this makes the most sense to me. Their valuation largely rests on this mystique they've built that says they alone are destined to unlock AGI. But there's just no reason to believe they have a secret sauce nobody else can reproduce.

    Seems more likely that OpenAI's biggest secret is that they have no secrets, and they are desperately trying to come up with a second act as tech companies with more robust product portfolios begin to catch up.

bjornsing 2 days ago

The OpenAI saga is a fine illustration of how “AI safety” will work in practice.

Hint: it won’t.

  • typon 2 days ago

    AI Safety is a science fiction created by large corporations and useful idiots to distract from working on boring, real AI safety concerns like bias, misinformation, deepfakes, etc.

humansareok1 2 days ago

Given what Sam has done by clearing out every single person who went against him in the initial coup and completely gutting every safety related team the entire world should be on notice. If you believe what Sam Altman himself and many other researchers are saying, that AGI and ASI may well be within reach inside this decade, then every possible alarm bell should be blaring. Sam cannot be allowed to be in control of the most important technology ever devised.

  • lolinder 2 days ago

    I don't know why anyone would believe anything this guy is saying, though, especially now that we know he's going to receive a 7% stake in the now-for-profit company.

    There are two main interpretations of what he's saying:

    1) He sincerely believes that AGI is around the corner.

    2) He sees that his research team is hitting a plateau of what is possible and is prepping for a very successful exit before the rest of the world notices the plateau.

    Given his track record of honesty and the financial incentives involved, I know which interpretation I lean towards.

    • cowpig 2 days ago

      This is a false dichotomy. Clearly getting money and control are the main objectives here, and we're all operating over a distribution of possible outcomes.

      • lolinder 2 days ago

        I don't think so. If Altman is prepping for an exit (which I think he is), I'm having a very hard time imagining a world in which he also sincerely believes his company is about to achieve AGI. An exit only makes sense if OpenAI is currently at approximately its peak valuation, not if it is truly likely to be the first to AGI (which, if achieved, would give it a nearly infinite value).

        • MerManMaid a day ago

          What's the effective difference, to him personally, between exiting now and OpenAI achieving what you call "nearly infinite value"?

          Either way he is set for life, truly being one of the wealthiest humans to have ever existed... literally.

    • mirekrusin 2 days ago

      ...or he's just Palpatine who wants shitload of money regardless of future speculations, end of story.

  • meowface 2 days ago

    It's interesting because one of the points Sam emphatically stresses over and over on most podcasts he's gone on in the past 4 years is that a single person, a single company, or a collection of companies controlling ASI would be absolutely disastrous, and that there needs to be public, democratic control of ASI and the policies surrounding it.

    Personally I still believe he thinks that way (in contrast to what ~99% of HN believes) and that he does care deeply about potential existential (and other) risks of ASI. I would bet money/Manifoldbux that if he thought powerful AGI/ASI were anywhere near, he'd hit the brakes and initiate a massive safety overhaul.

    I don't know why the promises to the safety team weren't kept (thus triggering their mass resignations), but I don't think it's something as silly as him becoming extremely power hungry or no longer believing there were risks or thinking the risks are acceptable. Perhaps he thought it wasn't the most rational and efficient use of capital at that time given current capabilities.

    • verve_rat 2 days ago

      Or maybe he is just a greedy liar? From the outside looking in, how can you tell the difference?

      • meowface 2 days ago

        It's possible that both things could be true. He may be a greedy liar while still being very concerned about ASI safety and wanting it to be controlled by humanity collectively (or at least the population of each country, via democratic means).

        Maybe he is only a greedy liar. I don't know. I'm just stating my personal belief/speculation.

insane_dreamer 2 days ago

It's worth noting that of OpenAI's 13 original founders, 10 have now left the company and 1 more is on leave, leaving only Sam and Wojciech.

Safe AI, altruistic AI, human-centric AI, are all dead. There is only money-generating AI. Fuck.

hakcermani 2 days ago

Can they at least change the name from OpenAI to something else, and leave gutted OpenAI as the non-profit shell..

zmgsabst 2 days ago

So if I contributed IP to ChatGPT on the basis that OpenAI was a non-profit and they relicense can they sell my IP?

That seems like fraud to me.

  • boppo1 2 days ago

    Didn't altman say 'pwease wet us ignowe copywhite waw! we can't be pwofitabwe if we don't...' in some legal forum recently?

jwr 2 days ago

Can we all agree that the next time a company announces itself (or a product) as "open", we'll just laugh out loud?

I can't think of a single product or company that used the "open" word for something that was actually open in any meaningful way.

  • dmitrygr 2 days ago

    Most of us laughed out loud this time too, for this very same reason. But it is fun to watch the rest of y'all learn :)

charles_f 2 days ago

I'm wondering what this will change. This is probably naive of me because I'm relatively uneducated on the topic, but it feels like OpenAI has never really worked like your typical non-profit (e.g. keeping their stuff mostly closed-source and seeking a profit).

Flex247A 2 days ago

The jokes write themselves!

imranhou 2 days ago

Based on what I've read it is allowed for a non profit to own a for profit asset.

So I'm assuming the game plan here is to adjust the charter of the non-profit to basically say: we are going to keep doing "Open AI" (we all know what that means), but funded by the proceeds from selling chunks of this for-profit entity. In essence, the non-profit parent wouldn't fulfill its mission by controlling what OpenAI does, but by how it puts to use the money it gets from OpenAI.

And in this process, Sam gets a chunk (as a payment for growing the assets of the non-profit, like a salary/bonus) and the rest as well....?

  • SkyPuncher 2 days ago

    I went through something similar with a prior startup. Though, it wasn't anything nefarious, like this.

    Basically, the plan was to create a new for-profit entity then have the not-for-profit license the existing IP to the for-profit. There were a lot of technicalities to it, but most of that was handled by lawyers drawing up the chartering paperwork.

conqrr 2 days ago

I remember a time when promises meant something. In many human epics (Greek, Hindu), people would stick to their word, and commitment was respected. The written word was much more powerful than the spoken. People appreciated depth. Wish we could teach and learn from those times.

  • troad 2 days ago

    Proto-Indo-European culture seems to have placed a great deal of importance on the sanctity of contracts and the notion of reciprocal hospitality, reflected in many descendant myths and etymologies surviving from Ireland through India. If the Indo-Europeans did indeed come from the Ukrainian steppes, as seems likely, this may be a reflection of their high mobility (on horseback) causing lots of contact with potential for friction, and the fact that important personal property (for a steppe society, mainly herd animals) was moveable and vulnerable to cattle rustling. There's some great books on Indo-European society if you're interested.

    • tivert a day ago

      > Proto-Indo-European culture seems to have placed a great deal of importance on the sanctity of contracts and the notion of reciprocal hospitality ... There's some great books on Indo-European society if you're interested.

      What are those books?

      • troad a day ago

        For PIE, the most accessible recent book in English is probably The Horse, the Wheel, and Language (Anthony 2007), and I would highly recommend it. (It pulls double duty as a laymen's and as an academic publication, so if you do read it, know that it's OK to skip the minutiae of pottery shards.)

        A slightly older but enduringly popular work is In Search of the Indo-Europeans: Language, Archaeology, and Myth (Mallory 1989). I'm told How to Kill a Dragon: Aspects of Indo-European Poetics (Watkins 1995) is very good too, but I have not read it myself.

        Chapter 2 of Indo-European Language and Culture: An Introduction (Fortson 2004) is a very good summary of PIE culture. The book is excellent in general - probably the best English language introduction to PIE linguistics at the moment - but outside the first two chapters it is not generally accessible to laymen.

        If you read other European languages it's worth checking out other books that may be available to you. The field by definition requires the ability to read a decent number of languages, so the literature is spread out across English, German, French, Russian, etc.

    • mr_toad 2 days ago

      With no centralised justice system people had to rely on people keeping their word. If people had lied and cheated all the time their society would have collapsed.

      An oath breaker didn’t just harm an individual, they did harm to the whole community, and the whole community viewed them negatively.

      • troad 2 days ago

        Yes, though that could apply to any pre-modern society, whereas the Indo-Europeans were especially caught up on contracts and hospitality, even compared to similar societies. It's all quite fascinating, and in the context of them going on take over a vast swathe of Eurasia, one wonders whether this kind of ritualised deal-making was a significant competitive advantage that their neighbours lacked at the time.

  • yieldcrv 2 days ago

    This nation’s largest institutions were founded on deceit from the colonies to the founding fathers.

    Aaron Burr raised capital for a fake water company and applied for a banking charter for what is now JP Morgan Chase

    some people are playing by a more effective set of rules and others are being lied to from a young age

refurb 2 days ago

The restructuring is designed in part to make OpenAI more attractive to investors

I'm not surprised in the least.

Who is going to give billions to a non-profit with a bizarre structure where you don't actually own a part of it but have some "claim" with a capped profit? Can you imagine bringing that to Delaware courts if there was disagreement over the terms? Investors can risk it if it's a few million, but good luck convincing institutional investors to commit billions with that structure.

At that point you might as well just go with a standard for-profit model where ownership is clear, terms are standard and enforceable in court and people don't have to keep saying "explain how it works again?".

sandwichmonger 2 days ago

Then why keep the name OpenAI?

  • HarHarVeryFunny 2 days ago

    This would have been the perfect time to change it, but maybe they will soon, if not at the same time as any official announcement.

    It's hard to say if there is much brand value left with "OpenAI" - lots of history, but lots of toxicity too.

    At the end of the day they'll do as well as they are able to differentiate and sell their increasingly commoditized products, in a competitive landscape where they've got Meta able to give it away for free.

  • charles_f 2 days ago

    It's a widely known brand, even by people outside of the industry. Why would they change it? Their AI was never really open to begin with, so nothing really changes on that front.

  • Mistletoe 2 days ago

    "Doublethink means the power of holding two contradictory beliefs in one's mind simultaneously, and accepting both of them". -1984

  • zmgsabst 2 days ago

    Microsoft needs to lie due to pervasive ill-will from their previous abuses.

    • high_na_euv 2 days ago

      Some people will always manage to blame MSFT, even for someone else's shadiness, lol.

      Consider adding some EEE

breck 2 days ago

This is great. Sam tried the non-profit thing, it turned out not to be a good fit for the world, and he's adapting. We all get to benefit from seeing how non-profits are just not the best idea. There are better ways to improve the world than having non-profits (for example, we need to abolish copyright and patent law; that alone would eliminate the need for perhaps the majority of non-profits that exist today, which are all working to combat things that are downstream of the toxic information environment created by those laws).

  • garbanz0 2 days ago

    Yes, Altman not having 7% of the company was not a good fit for the world.

redbell 2 days ago

It's really hard to stick to your original goals after you achieve unexpected success. It's like a politician making promises before the elections but finding it difficult to keep them once elected.

On March 1st, 2023, a warning was already sounding: OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit (https://news.ycombinator.com/item?id=34979981)

uhtred 2 days ago

Fund your startup by masquerading as a non profit for a few years and collecting donations, genius!

The stinking peasants will never realize what's happening until it's too late to stop!

thih9 2 days ago

I’d guess it would be legally not possible to turn a non-profit into a for-profit company, no matter how confusing the company structure gets. And even (or rather, especially) if the project disrupts the economy on a global level. I’m not surprised that this is happening, but how we got here - I don’t know.

ChrisArchitect 2 days ago
  • lolinder 2 days ago

    That one never made it to the front page because reasons. All that discussion was people reading the story elsewhere and going looking for the HN thread.

    • ChrisArchitect 2 days ago

      never made it? That's the thread. The discussion is there. Long before this one. Lots of people saw it, commented on it. It's a dupe. Stop splitting the discussion up.

      • lolinder 2 days ago

        It was never above page 4 (position 90):

        https://hnrankings.info/41651548/

        This version of the thread is the first to have had any traction on the front page.

        When the algorithm artificially stops a topic from surfacing to the front page, the article that finally makes it past the algorithm's suppression is not a duplicate, it's the canonical copy.

        • ChrisArchitect 2 days ago

          So what if it didn't make the front page. That doesn't mean ppl didn't see it. Doesn't mean ppl aren't commenting on it. Maybe it's just not that interesting. There was also a lot of other big news at the same time with Meta etc taking attention. And followed by the other OpenAI news with Mira. Again, the discussion is there.

          • lolinder 2 days ago

            You're seriously going to argue that OpenAI changing to a for profit wasn't interesting enough to rise above page 4 of Hacker News? Doesn't the existence of this second thread disprove that claim pretty thoroughly?

            I'm pretty sure that what happened is that the Murati thread was id'd by the algorithm as the canonical OpenAI discussion, artificially suppressing the more interesting topic of the complete restructuring of the most important company in existence today.

            • ChrisArchitect 2 days ago

              The front page doesn't matter if lots of ppl are still seeing it. 300+ upvotes is plenty and the usual for a major news story in a week. It is in no way buried. Discussion can still be/should be merged. Then it'll have 1000 upvotes etc. Representing its true significance and not making us duplicate all of our discussion comments!

              • lolinder 2 days ago

                A lot of people get their tech news by looking at the front page of HN. An algorithm artificially stopping the day's most important news story from surfacing there, leading to the discussion only being found by people who actively go looking for that specific discussion because they learned about it elsewhere, is absolutely a big deal.

                I'm just glad that the Murati story falling off the front page allowed this one a second chance.

                • ChrisArchitect 2 days ago

                  A lot of people saw the story. Without searching. Maybe more by simply searching openAI. Traffic gets sent in from external feeds etc. It's not buried. But the conversation is all disjointed now. Merging the [dupe] only makes it better/stronger.

                  • lolinder 2 days ago

                    > A lot of people saw the story. Without searching.

                    Do you have a source for this? How did they find it if not on HN?

                    > Merging the [dupe] only makes it better/stronger.

                    Moving the ~50 comments from the other thread here makes a ton of sense. All I'm saying is that this is the canonical and the other is the dupe.

seanvelasco 2 days ago

the openai team, and the tech community too, should've sided with the board, not sam. the fact that ilya had a hand in it should've given it weight and backing.

"openai is nothing without its people." well, the key people left. soon, it will just be sam and his sycophants.

e-clinton 2 days ago

So many red flags about Saltman. I have to imagine some of the investors are having second thoughts.

  • kranke155 2 days ago

    Some number of them must be people who only care about winning, like Sam. Plus they're getting huge returns.

    I get strong "next Mark Zuckerberg" vibes from Sam. Build a zombie product that approaches worthlessness after a few years, but made himself hugely rich in the process, and buys off tech and people as needed to maintain some kind of relevance.

baradhiren07 2 days ago

Value vs Morality. Only time will tell who wins.

  • lenerdenator 2 days ago

    In many SV denizens' heads, they're one and the same.

    Which is why we need to reopen more asylums and bring back involuntary commitment.

xyst 2 days ago

Probably one of the many decisions that Mira and other original founders were against.

Sam Altman is a poison pill.

  • versteegen 2 days ago

    Mira joined in 2018. OpenAI was founded in 2015.

throwaway314155 2 days ago

Any reporting on the impact this is having on lower level employees? My understanding is they are all sticking around for their shares to vest (or RSUs, I guess).

but still, you'd think some of them would have finally had enough and have enough opportunities elsewhere that they can leave.

rqtwteye 2 days ago

When will they start adding ads to the AI output? Seems that's the next logical step.

southernplaces7 2 days ago

It's hilarious that this should surprise anyone at all, or that Sam Altman should seem anything but a mendacious, self-serving, compulsive liar of the worst tech-world kind. For example, Elon Musk gets lots of hate for all kinds of things. Some of it is very valid, but much of it also goes to the point of there being derangement syndrome around him, partly (I suspect) because of his openly stated zeitgeist-contrary political beliefs. Yet I'd pick his sometimes crude, bullying but fundamental openness about who he is any day over the shiny paint job of platitudes and false correctness found in someone like Altman. Not to mention that the overall value of Musk's companies trumps anything I've seen done by Altman's sludge-pumping AI technology so far. This last is of course not a moral judgement but a practical one.

  • replwoacause 2 days ago

    Objectively, they’re both no good.

EcommerceFlow 2 days ago

On what planet would Elon not get a piece of this new for-profit company?

johanneskanybal 2 days ago

I'm willing to pair up with fundamentalist christians to derail this with the argument that he/this is satan/the end of the world.

keepamovin 2 days ago

Monday to come after Sunday, in revised push for transparency

germandiago 2 days ago

What a surprise!!!! I would have never said so...

djohnston 2 days ago

When a company makes such a transition are they liable for any sort of backdated taxes/expenses they avoided as a non-profit?

roody15 2 days ago

Wait.. Sam Altman also owns (or did own) ycombinator?

Sunscratch 2 days ago

Should be renamed to NonOpenAI,or MoneyMattersAI

  • xzjis 2 days ago

    ClosedAI or PrivateAI

  • causal 2 days ago

    Saw someone on HN call it NopenAI

wg0 2 days ago

For profit it will be when it will be profitable.

RivieraKid 2 days ago

Good, I would do the same because it's a reasonable thing to do. It's easier to succeed as a for-profit.

  • j_maffe 2 days ago

    Succeed off of lies and deceit to gain goodwill from workers and governments.

scubadude 2 days ago

Stop believing that these companies and people are benevolent.

KoolKat23 2 days ago

I do wonder if this is why Mira left, as one of the non-profit board members.

bossyTeacher 2 days ago

There is a post with 500 comments that was posted before this one. Why didn't that post make it to the top? I know Y Combinator used to have sama as president, but you can't censor this type of big news in this day and age.

1024core 2 days ago

Now we know why people like Ilya, Brockman, Murati, etc. left the company.

kopirgan 2 days ago

Guess what they mean is a for-loss company.

anon291 2 days ago

Wonder what happens to the employees' equity.

Traubenfuchs 2 days ago

Why were they a non-profit in the first place?

  • throw_m239339 2 days ago

    I imagine for tax reasons?

    Why the h are they called "openAI" too? nothing is open for them but your own wallet.

hooverd 2 days ago

Will they be rebranding to ClosedAI?

ocodo 2 days ago

Oh! You don't say.

unnouinceput 2 days ago

That's a lot of words for Micro$oft to say they just love money. Who knew!

msie 2 days ago

Quelle surprise.

surfingdino a day ago

Let me guess, the talent will be getting screwed on options?

hyggetrold 2 days ago

Reminds me of what my first-year econ professor in college once stated after disabusing myself and some other undergrads of our romantic notions about how life should work.

"Do I shock you? This is capitalism."

ein0p 2 days ago

Ethics aside, I think there’s a silver lining to all this: at least they believe this can be profitable

skadamat 2 days ago

Now the real question is - will they finally drop the "Open" part?

reducesuffering 2 days ago

OpenAI couldn't even align their Sam Altman and their people to their non-profit mission. Why should you ever believe they will align AGI to the well being of humanity?

What happened to all the people making fun of Helen Toner for attempting to fire Sama? She and Ilya were right.

ForHackernews 2 days ago

The good thing is, we don't need to wonder what AGI will be like: we already know what it's like in a world populated by soulless inhuman entities pursuing their own selfish aims at the expense of mankind.

seydor 2 days ago

an AGI is showering us with irony

geodel 2 days ago

Good. Now it is just a matter of profit-making company.

kidsil 2 days ago

And the enshittification process begins.

stonethrowaway 2 days ago

I’m waiting for pg and others to excuse this all by posting another apologetic penance which reminds us that founders are unicorns and everyone else is a pleb.

  • brap 2 days ago

    pg is sama’s biggest DR-er (interpret this as you will).

    They have that Michael Scott & Ryan energy.

Jatwood 2 days ago

shocked. shocked! well not that shocked.

yapyap 2 days ago

Become? lol

throwaway918299 2 days ago

Huh? I thought they already had for-profit and non-profit entities? Is the non-profit entity just going away (paywall)? gross.

upwardbound 2 days ago

Relatedly, dalant979 found this fascinating bit of history: https://old.reddit.com/r/AskReddit/comments/3cs78i/whats_the...

Yishan Wong describes a series of actions by Yishan and Sam Altman as a "con", and Sam jumps in to brag that it was "child's play for me" with a smiley face. :)

  • latexr 2 days ago

    > and Sam jumps in to brag

    I never read that as a brag, but as a sarcastic dismissal. That’s why it started with “cool story bro” and “except I could never have predicted”. I see the tone as “this story is convoluted” not as “I’ll admit to my plan now that you can’t do anything about it”.

    That’s not to say Sam isn’t a scammer. He is. It just doesn’t seem like that particular post is proof of it. But Worldcoin is.

    https://www.buzzfeednews.com/article/richardnieva/worldcoin-...

    https://www.technologyreview.com/2022/04/06/1048981/worldcoi...

    • upwardbound 2 days ago

      If I understand the history correctly, Yishan (the former Reddit CEO) is talking about himself when he talks about a CEO in this story, and so Yishan's post is a brag, with a thin denial tacked on at the end. That's why I believe that Sam (Yishan's friend) is also engaging in thinly-veiled bragging about these events.

      Here is Yishan's comment with his name spelled out for clarity instead of just saying "CEO":

          In 2006, reddit was sold to Conde Nast. It was soon obvious to many that the sale had been premature, the site was unmanaged and under-resourced under the old-media giant who simply didn't understand it and could never realize its full potential, so the founders and their allies in Y-Combinator (where reddit had been born) hatched an audacious plan to re-extract reddit from the clutches of the 100-year-old media conglomerate.
      
          Together with Sam Altman, they recruited a young up-and-coming technology manager [named Yishan Wong] with social media credentials. Alexis, who was on the interview panel for the new reddit CEO, would reject all other candidates except this one. The manager was to insist as a condition of taking the job that Conde Nast would have to give up significant ownership of the company, first to employees by justifying the need for equity to be able to hire top talent, bringing in Silicon Valley insiders to help run the company. After continuing to grow the company, [Yishan Wong] would then further dilute Conde Nast's ownership by raising money from a syndicate of Silicon Valley investors led by Sam Altman, now the President of Y-Combinator itself, who in the process would take a seat on the board.
      
          Once this was done, [Yishan Wong] and his team would manufacture a series of otherwise-improbable leadership crises, forcing the new board to scramble to find a new CEO, allowing Altman to use his position on the board to advocate for the re-introduction of the old founders, installing them on the board and as CEO, thus returning the company to their control and relegating Conde Nast to a position as minority shareholder.
      
          JUST KIDDING. There's no way that could happen.
      
      -- yishanwong

      My understanding of what Sam meant by "I could never have predicted the part where you resigned on the spot" is that he was conveying respect for Yishan essentially out-playing him at the end (the two of them are friends): by resigning, Yishan distanced himself from the situation and left Sam "holding the bag" of potential liability.

  • benterix 2 days ago

    The board drama part and key people leaving seem oddly familiar.

aoeusnth1 2 days ago

The IRS should get involved. This is a cut-and-dried case of embezzlement of 501(c)(3) resources.

widerporst 2 days ago

The fact that this has just disappeared from the front page for me, just like the previous post (https://news.ycombinator.com/item?id=41651548), somehow leaves a bitter taste in my mouth.

  • nitsuaeekcm 2 days ago

    Look at the URL. It’s because the original WSJ title was “OpenAI Chief Technology Officer Resigns,” which was a dupe of yesterday’s discussions. WSJ changed the title yesterday evening.

  • mattcollins 2 days ago

    I noticed that, too. It does seem 'odd'.

  • jjulius 2 days ago

    I found this on the front page an hour after you made this comment.

  • Davidzheng 2 days ago

    Yeah hope for some transparency here

  • davidcbc 2 days ago

    You can't post content critical of some HN darlings without being flagged by their fans

    • jjulius 2 days ago

      How do you explain all of the constant unflagged criticism of OpenAI and Sam Altman throughout nearly every OpenAI thread? I mean, look around at all of the comments here...

davesmylie 2 days ago

Ahh. What a surprise - no-one could have predicted this

  • freitasm 2 days ago

    Seeing so much money rolling in, it's hard not to want a slice of the pie.

    • vrighter 2 days ago

      What money? Aren't most (all?) AI companies operating at a loss?

      • jsheard 2 days ago

        The ones selling the shovels are doing well, but otherwise yeah nobody is making any money.

        For the true believers that's just a temporary setback on the way to becoming trillionaires though.

        • alan-hn 2 days ago

          I think the people getting salaries are doing just fine

        • opdahl 2 days ago

          Well they are spending billions to make shovels that they are selling for millions (roughly).

          • jsheard 2 days ago

            The shovel salesmen in this case are the likes of Nvidia, Huggingface and Runpod who are on the receiving end of the billions that AI model salesmen are spending to make millions in revenue. HF are one of the vanishingly few AI-centric startups who claim to already be profitable, because they're positioned as a money sink for the other AI-centric startups who are bleeding cash.

      • bschmidt1 2 days ago

        OpenAI is losing billions in the way Uber lost billions - through poor management.

        When/if Altman ever gets out of the way like Travis K did at Uber, the real business people can come in and run the company correctly. Exactly like what happened with Uber, which never turned a profit in the US under that leadership and had its lunch eaten by a Chinese knock-off abroad for years. Can't have spoiled brats in charge; they have no experience and are wasteful and impulsive. Especially someone like Altman, who has no engineering talent either. What is his purpose at OpenAI? He can't do anything.

      • jasonlotito 2 days ago

        Hi, you are new here. Welcome to tech.

    • bdjsiqoocwk 2 days ago

      Only hard if you're that kind of person. Not everyone is like that. And those kinds of people have difficulty believing this.

      • pdpi 2 days ago

        It's a "courage isn't the absence of fear" sort of situation.

        I don't think there are many people out there who would not be tempted at all to take some of that money for themselves. Rather, people are willing and able to rise above that temptation.

        • causal 2 days ago

          Ehh, there's a lot of space between "desperately in need" and "wanting to seize billions in equity".

      • mattmaroon 2 days ago

        Everyone is like that when the number is potentially in the trillions. There are just people who are like that and people who think they aren’t because they’ve never been within a digit grouping of it.

        • sgu999 2 days ago

          Illustrating that part of the parent comment:

          > And those kinds of people have difficulty believing this.

        • ben_w 2 days ago

          > Everyone is like that when the number is potentially in the trillions

          No, we're really not all like that.

          I stopped caring about money at 6 digits a decade ago, and I'm not even at 7 digits now because I don't care for the accumulation of stuff — if money had been my goal, I'd have gone to Silicon Valley rather than to Berlin, and even unexciting work would have put me between 7 and 8 digits by this point.

          I can imagine a world in which I had made "the right choices" with bitcoin and Apple stocks — perfect play would have had me own all of it — and then I realised this would simply have made me a Person Of Interest to national intelligence agencies, not given me anything I would find more interesting than what I do with far less.

          I can imagine a future AI (in my lifetime, even) and a VN (von Neumann) replicator, which rearranges the planet Mercury into a personal O'Neill cylinder for each and every human — such structures would exceed trillions of USD per unit if built today. Cool, I'll put a full-size model of the Enterprise D inside mine, and possibly invite friends over to play Star Fleet Battles using the main bridge viewscreen. But otherwise, what's the point? I already live somewhere nice.

          > There are just people who are like that and people who think they aren’t because they’ve never been within a digit grouping of it.

          Does it seem that way to you because you yourself have unbounded desire, or because the most famous business people in the world seem so?

          People like me don't make the largest of waves. (Well, not unless HN karma counts…)

          • snapcaster 2 days ago

            You think buying apple stock and bitcoin would put you on the radar of intelligence agencies? Wouldn't that grouping be some massive number of middle class millennials?

            • ben_w 2 days ago

              Perhaps I wasn't clear. When I wrote:

              > perfect play would have had me own all of it

              I meant literally all of it: with perfect play and the benefit of hindsight, starting with the money I had in c. 2002 from summer holiday jobs and initially using it for Apple trades until bitcoin was invented, it was possible to own all the bitcoin in existence with the right set of trades.

              Heck, never mind perfect play, at current rates two single trades would have made me the single richest person on the planet: buying $1k of Apple stock at the right time, then selling it all for $20k when it was 10,000 BTC for some pizzas.

              (But also, the attempt would almost certainly have broken the currency as IMO there's not really that much liquidity).

              • snapcaster a day ago

                I don't think I get your point, if any of us knew lottery numbers or roulette outcomes ahead of time we could get rich of course. Are you claiming you did have this knowledge but discarded it?

                • ben_w 11 hours ago

                  I'm saying I role-play the scenario of being worth on the order of a trillion USD, or a quarter trillion if it's the two specific trades in the example above.

                  Being that rich doesn't lead my imagination to new happiness that I don't already possess, only new stresses that I don't already possess. I don't want a super yacht, a personal jet, nor a skyscraper with my name on it, and a private island is less appealing to me than a tourist destination.

                  Knowing this about myself, I don't need to chase higher pay or time the markets to get things I can't currently afford; instead I focus on things that I do like. Those things in aggregate cost less than €12k/year, including travel.

        • phito 2 days ago

          There definitely are people who aren't like that. Not a lot for sure, but they exist.

        • bdjsiqoocwk 2 days ago

          What you're showing me is you don't know very many different people.

        • mandmandam 2 days ago

          There are many examples through history proving you wrong.

          * Frederick Banting sold the patent for insulin to the University of Toronto for just $1.

          * Tim Berners-Lee decided not to patent the web, making it free for public use.

          * Jonas Salk refused to patent the polio vaccine - "can you patent the sun?"

          * Richard Stallman and Linus Torvalds could have easily sold humanity out for untold billions.

          * Chuck Feeney silently gave away $8bn, keeping only a few million.

          ... And in any case, this is an extreme situation. AI is an existential threat/opportunity. Allowing it to be sidestepped into the hands of Sam "sell me your retinal scans for $50" Altman is fucking insane, and that's putting it lightly.

          • Vespasian 2 days ago

            I'm very happy that the EU got into the game early and started regulating AI.

            It's way easier to adapt an existing framework one way or the other if the political part is already done.

            I trust the AI industry to be a good steward even less than I trust the tech industry in general, and when the area where I live has a chance at avoiding the worst outcome (even if at a price) in this technological transition, I'm taking it.

          • Clubber 2 days ago

            Also, most of the robber barons of the early Industrial Revolution gave away all or most of their wealth.

            https://factmyth.com/factoids/the-robber-barons-gave-most-of...

            https://www.carnegie.org/about/our-history/gospelofwealth/

            • mattmaroon 2 days ago

              Giving away much of one’s wealth is very different than choosing not to accumulate it in the first place.

              It’s hard to find someone who has gotten to a position where they might have a reasonable shot at becoming the world’s wealthiest person who doesn’t think they’d be a great steward of the wealth. It makes much more sense for a titan of industry to make as much as they can and then give much away than it does to simply not make it.

          • mattmaroon 2 days ago

            These are all examples of people who were not even remotely looking at the sums of money involved in AGI, both in terms of investment required and reward. I used “trillions” rather than “billions” for a reason. Inflation adjust it all you want, none of these passed up 1/10th of this opportunity.

            • mandmandam 2 days ago

              It's possible Tim Berners-Lee gave up hundreds of billions.

              Regardless, you've missed the point. Some people value their integrity over $trillions, and refuse to sell humanity out. Others would sell you out for $50.

              Or to put it another way: Some people have enough, and some never will.

              • mattmaroon 2 days ago

                It is not possible he could have thought that in 1992. It’s probably not even possible that he passed up one billion. Had he tried he’d likely have lost out to open standards like so many others did.

                You could prove me wrong easily. Find someone who raised (inflation adjusted) tens of billions, had the opportunity to make trillions, and declined. You can’t. You can likely take that down two orders of magnitude and still not succeed.

                People who did some work on their own and open sourced something small that turned into something huge are not even close to what we’re talking here. They didn’t turn down a trillion, they turned down $100k that turned into a trillion later.

                It isn’t about integrity. It’s about how humans rationalize. “Nobody is being hurt here.” “This can improve humanity.” “I’ll do good things with the money.”

                It’s easy to say people have “integrity” when you just define it as adhering to your belief system when the stakes seem low, not their belief system when the stakes are clearly high.

                • mandmandam a day ago

                  > It isn’t about integrity.

                  Oh, but it is. That's exactly what this is about.

                  As pointed out much earlier in the thread, it's generally the people who lack integrity that fail to acknowledge or recognize it in others.

                  "For what shall it profit a man, if he shall gain the whole world, and lose his own soul?"

                  - You don't have to be Jesus to understand the sense of this quote. Just be honest and observant.

                  • mattmaroon a day ago

                    Right but you’re defining integrity in a different way than anyone who has ever raised $10b and had a company (or whatever you want to call OpenAI, I have never really known how to refer to its unique arrangement) that was worth $150b and rapidly climbing.

                    They would tell you (and sincerely believe it) that it being a for profit is better for a whole list of reasons. They believe they have integrity. They don’t believe they’ve lost their soul. They believe they’re doing a whole lot of good for the world.

                    That’s my point. The denizens of HN think they lack integrity for this, I think they just define it differently and anyone playing for trillions would define it their way.

  • coffeebeqn 2 days ago

    Will they finally rename the company?

    • pelagicAustral 2 days ago

      Omni Consumer Products, or Altman-Yutani Corporation would be nice

    • tokai 2 days ago

      ClopenAI

      • lioeters 2 days ago

        I hope some journalist popularizes "clopen" as a neologism to describe organizations that claim to be open and transparent but in practice are closed and opaque.

        Or "clopen-source software", projects that claim to be open-source but vital pieces are proprietary.

        • beepbooptheory 2 days ago

          Probably most people here don't know this (and probably not totally universal), but a "clopen" is what you call it when you have to work a morning shift the day after having worked an evening shift.

          • flkenosad 2 days ago

            This is the first thing I thought of.

        • wazdra 2 days ago

          Idk if that was parent's ref, but "clopen" is also a term in topology, for a set that is simultaneously closed and open.
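
          A minimal sketch of the topology sense, with an illustrative example of my own (the specific space X and subset A below are assumptions, not anything from the thread):

              % A subset A of a space X is clopen when it is both open and closed in X.
              % Example: X = [0,1] ∪ [2,3] with the subspace topology inherited from ℝ.
              % A = [0,1] is open in X:    A = X ∩ (-1/2, 3/2)
              % A is also closed in X:     X \ A = [2,3] = X ∩ (3/2, 7/2) is open
              \[ X = [0,1] \cup [2,3] \subset \mathbb{R}, \qquad A = [0,1] \ \text{is clopen in } X. \]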

    • ericjmorey 2 days ago

      Is the name Worldcoin available?

      • timeon 2 days ago

        It is not. But you can remove some letter like 'i' for example.

    • ognyankulev 2 days ago

      Renaming to "TheAI" would match their ambition for AGI.

      • sd9 2 days ago

        Drop the "The", just "AI"

        • pluc 2 days ago

          Altman Intelligence Inc.

          • romanhn 2 days ago

            Altman Inc seems sufficient

    • timeon 2 days ago

      Sure. Since AI is closed now, they will try 'OpenAGI'.

    • mc32 2 days ago

      PigpenAI

allie1 2 days ago

"Shocking!" It's a shame that one of the biggest advancements of our time has come about in as sleazy a way as it has.

Reputationally... the net winner is Zuck. Way to go Meta (never thought I'd think this).

limit499karma 2 days ago

Reuters had the exclusive yesterday but somehow it never surfaced for long here:

"OpenAI to remove non-profit control and give Sam Altman equity"

https://www.reuters.com/technology/artificial-intelligence/o...

  • blackeyeblitzar 2 days ago

    It appeared but was buried on the second page and never made it to the front page, for some weird reason. Some in the discussion speculated that it was due to a flame war detection algorithm:

    https://news.ycombinator.com/item?id=41651548

    • CatWChainsaw 2 days ago

      "for some weird reason" when Altman was an important player at YC and PG still sings his praises.

AI_beffr 2 days ago

I remember, years ago, seeing a video of Sam Altman interviewing Elon Musk, filmed inside the SpaceX factory. Maybe you know the one? I didn't know who Sam was at the time, but I was very, very put off by the way he was behaving. He had this bizarre, almost unbelievable expression on his face, almost like a child pantomiming adoration of his parents, this weird, fake shy-smile. I immediately took an intense dislike to him: this person seemed extremely fake, manipulative and narcissistic. It was such low-level behavior that I thought he must be some intern, someone way out of their depth, and the interview some kind of fluke.

It's so unbelievably insane to me that this person, whom I disliked so much that I remembered him without even knowing his name or who he was, is now at the helm of one of the most important developments in human history. The subject of today's headline is no surprise at all. I think everyone should consider very carefully that Sam Altman will probably, at some point, be the very first person in the world to sit down in front of a console and hold the reins, directly and without supervision, of a super-intelligent system that bears none of the regulatory or moral restrictions that would stop it from taking over the world. This evil, narcissistic, lying, money-hungry, power-grabbing A-hole will hold more power than any human has ever held. Do you really want that?

Kim_Bruning 2 days ago

I guess this vindicates the (original) OpenAI Board, when they tried to fire Sam Altman.

neilv 2 days ago

This post somehow fell off the front page before California wakes up (9:07 ET), but not buried deep like buried posts usually are:

> 57. OpenAI to Become For-Profit Company (wsj.com) 204 points by jspann 4 hours ago | flag | hide | 110 comments

piyuv 2 days ago

I hope they rename the company soon, it’s a disgrace to call it “open”

croes 2 days ago

And suddenly Altman's firing no longer seems so crazy

  • crystal_revenge 2 days ago

    "suddenly"?

    I was under the impression that most people saw this coming years ago. The second "Open"AI refused to release the weights for GPT-2 for our "safety" (we can see in hindsight how obviously untrue this was, but most of us saw it then too), it was clear that they were headed towards profitability.

haliskerbas 2 days ago

Woah, the pompous eccentric billionaire(?) is actually not altruistic, never heard this story before!

/s

hello_computer 2 days ago

another mozilla. it’s time for guillotines. past time.

lenerdenator 2 days ago

Are you meaning to tell me that the whole nonprofit thing was just a shtick to get people to think that this generation of SV "founders" was going to save the world, for real this time guys?

I'm shocked. Shocked!

I better stock up on ways of disrupting computational machinery and communications from a distance. They'll build SkyNet if it means more value for shareholders.

  • Eliezer 2 days ago

    This is not how nonprofits usually work. This is blatant fraud. I cannot think of any other case besides OpenAI of this particular shenanigan being pulled.

    • ummonk 2 days ago

      The question isn't whether it has happened before, but whether they will get away with it.

whywhywhywhy 2 days ago

This is for the best, really. I can't even think of a non-profit in tech where, over time, it hasn't just become a system for non-productives to leech off a successful bit of technology while providing nothing, at times even stunting its potential and burning money on farcical things.

imdsm 2 days ago

A lot of people are unhappy about this, yet not at all unhappy (or even caring) about the thousands of others who started out for-profit. And while we're all here hacking away (we're hackers, right?), many of us with startups, what is it we're chasing? Profit, money, time, control. Are we different except in scale? Food for thought.

  • bayindirh 2 days ago

    It's not what they're doing (trying to earn money), but it's how they're doing it (in a very unethical and damaging way), while trying to whitewash themselves.

  • goodluckchuck 2 days ago

    It’s criminal. Many people donated money, worked for them, gave data, etc. on the promise that OpenAI was working towards the public good. It turns out those transactions occurred under false pretenses. That’s fraud.

  • quonn 2 days ago

    How is this food for thought? OpenAI had an unfair advantage by starting out as a non-profit.

    • imdsm 2 days ago

      What stopped others from doing this? Or is stopping them from doing it now?

      • consteval 2 days ago

        Their conscience? The fact they aren't pieces of shit?

        I'm sorry, have we gotten so far up our own asses as a profession that we no longer just excuse unethical behavior, we actually encourage it?

  • dleeftink 2 days ago

    > Profit, money, time, control

    I feel this only scratches the surface of what there is to chase in life. And with respect to a potentially singular, all-knowing piece of technology, these are not necessarily the goals people would want to imbue it with.

  • gdhkgdhkvff 2 days ago

    In any thread about companies that have some amount of hype around them, it’s difficult to tell the difference between comments coming from people with legitimate concerns about the issues at hand vs cynical people that have found their latest excuse to glom on to outrage against hype.

  • ashkankiani 2 days ago

    Your food is undercooked

    • imdsm 2 days ago

      That's a little unfair.

      If you don't mind me asking, what generation are you from? Perchance you're newer to Earth than I am, among those who find it hard to accept that others have different opinions?

  • neprune 2 days ago

    I see your point but I think it's fine to be angry and disappointed that an outfit that appeared to be trying to do it differently has abandoned the effort.

    • imdsm 2 days ago

      Perhaps it's the only way to survive?

      And what comes first, the mission, or being able to tell people you did your best but failed to build the thing you set out to build?

      Perhaps most of us are more interested in fairness than progress, and that's fine.

retskrad 2 days ago

Altman and OpenAI deserve their success. They’ve been key to the LLM revolution and the push toward AGI. Without their vision to turn an LLM into a product that hundreds of millions of people now use, and which has greatly enriched their lives, companies like Microsoft, Apple, Google, and Meta wouldn’t have invested so heavily in AI. While we’ve heard about the questionable ethics of people like Jobs, Musk, and Altman, their work speaks for itself. If they’re advancing humanity, do their personal flaws really matter?

  • infinitezest 2 days ago

    > advancing humanity

    Perhaps, but I'd say it's more of a mixed bag. Cell phones and social media have done harm and good at very large scales. As Dan Carlin once said, it feels like we're babies crawling toward handguns. We don't seem to be as wise as we are technically proficient.

    Oppenheimer "advanced humanity" by giving us nuclear power. Cool. I love cheap energy. Unfortunately, there were some uh... "unfortunate side-effects" which continue to plague us.

  • aiono 2 days ago

    Do you really want people who have a lot of power to have serious flaws? Looking back at history, it usually doesn't end well.

  • jimkoen 2 days ago

    > If they’re advancing humanity, do their personal flaws really matter?

    What's being discussed in this thread is not the personal failings of silicon valley darlings, but whether one of them just defrauded a few thousand people and embezzled a significant amount of capital. Citing his character flaws goes along with it though.

    Are you seriously arguing that people should be exempt from law for "advancing humanity"? Because I don't see any advancements whatsoever from all of the people mentioned. Altman and Musk would get a hardon for sure though, from being mentioned together with Jobs.

  • idle_zealot 2 days ago

    > If they’re advancing humanity, do their personal flaws really matter?

    Well, yeah, they're positioning themselves as some of the most powerful and influential individuals on earth. I'd say any personality flaws are pretty important.

  • CatWChainsaw 2 days ago

    They matter MORE. Are you like, completely unfamiliar with Spiderman and the whole great power great responsibility line??

  • cebu_blue 2 days ago

    Elon Musk isn't "advancing humanity".

    • PierceJoy 2 days ago

      I'm very far from a Musk fan, and if you want to make the case that Musk isn't responsible for Tesla, SpaceX, and Starlink, I think that's a legitimate argument to be made. But I don't think there's much argument that those three companies are not advancing humanity.

      • cbeach 2 days ago

        Tesla and SpaceX would not exist OR prosper, without Musk.

        If you want to understand why, read the Walter Isaacson biography of Musk (which is based on accounts by his friends, enemies and employees). He's a hard-arsed manager, he is technically involved at all levels of the company, he is relentless, and he takes risks and iterates like no other CEO.

        • PierceJoy a day ago

          Considering Musk isn't the founder of Tesla, that's obviously not true. He is the founder of SpaceX so that's probably true that it wouldn't exist without him.

          Walter Isaacson doesn't have the best reputation for covering his subjects objectively to be fair. If your source for what Elon has or hasn't done is Isaacson, you aren't standing on very solid ground.

          The bigger picture point though is that you can easily argue that the employees at those companies, and not a single man, are responsible for the success of those companies. We give far too much credit to CEOs.

    • cbeach 2 days ago

      I'm sure there were people that claimed Nikola Tesla or Henry Ford weren't "advancing humanity" at the time.

      There will always be people who disagree with the politics/opinions/allegiances of a successful person and who wish to downplay their success for selfish reasons.

      • squidsoup 2 days ago

        Please don't besmirch Tesla's good name by comparing him to Musk.

      • PierceJoy a day ago

        > There will always be people who disagree with the politics/opinions/allegiances of a successful person and who wish to downplay their success for selfish reasons.

        And conversely, there will always be people who agree with the politics/opinions/allegiances of a successful person and who wish to overstate the reasons behind their success for selfish reasons.