navjack27 3 days ago

Okay hear me out. Restructuring for profit, right? There will probably be companies spawned off of all of these departures.

If the government ever wants a third party to oversee the safety of OpenAI, wouldn't it be convenient if one of the people who left started a company focused on safety? Safe Superintelligence Inc. gets the bid because of lobbying, or whatever; I don't even care what the reason is in this made-up scenario in my head.

Basically what I'm saying is what if Sam is all like "hey guys, you know it's inevitable that we're going to be regulated, I'm going for profit for this company now, you guys leave and later on down the line we will meet again in an incestuous company relationship where we regulate ourselves and we all profit."

Obviously this is bad. But also obviously this is exactly what has happened in the past with other industries.

Edit: The man is all about the long con anyway. - https://old.reddit.com/r/AskReddit/comments/3cs78i/whats_the...

Another edit: I'll go one further on this: a lot of the people that are leaving are going to double down on saying that OpenAI isn't focused on safety, to build up the public perception (and therefore the governmental perception) that regulation is needed. So there's going to be a whole thing going on here. Maybe it won't just be safety; it might be other aspects too, because not all the companies can be focused on safety.

  • snowwrestler 2 days ago

    I think the departures and switch to for-profit model may point in a different direction: that everyone involved is realizing that OpenAI’s current work is not going to lead to AGI, and it’s also not going to change.

    So the people who want to work on AGI and safety are leaving to do that work elsewhere, and OpenAI is restructuring to instead focus on wringing as much profit as possible out of their current architecture.

    Corporations are actually pretty bad at doing tons of different things simultaneously. See the failure of huge conglomerates like GE, as well as the failure of companies like Bell, Xerox, and Microsoft to drive growth with their corporate research labs. OpenAI is now locked into a certain set of technologies and products, which are attracting investment and customers. Better to suck as much out of that fruit as possible while it is ripe.

    • jordanb an hour ago

      > See the failure of huge conglomerates like GE

      GE was a successful company as a major conglomerate which made aircraft engines, railroad locomotives and light bulbs.

      GE was a failure as a financialized "engine of financial performance" that was focused entirely on spinning off businesses, outsourcing, and speculating in the debt derivatives market.

    • mnky9800n 2 days ago

      I feel like it's unfair to expect growth to remain within your walls. Bell and Xerox both drove a lot of growth. That growth just left Bell and Xerox to go build things like Intel and Apple. They didn't keep it for themselves, and that's a good thing. Could you imagine if the world was really like those old AT&T commercials and AT&T was actually the one bringing it to you? I would not want a monolithic AT&T providing all technology.

      https://youtu.be/xBJ2KXa9c6A?si=pB67u56Apj7gdiHa

      I do agree with you. They are locked into pulling value out of what they got and they probably aren't going to build something new.

  • crossroadsguy 2 days ago

    I think it's rather one of these:

    1. Naah, it's not gonna lead to AGI, not here at least.

    2. If it's gonna be a for-profit then why the hell should I stick around here - maybe go somewhere that pays me more, or maybe I'll start my own gig.

    3. Or maybe, selling the snake oil is more profitable if I start my own brand of snake oil which is kinda close to point 2 anyway.

  • neycoda 2 days ago

    Now that AI has exploded, I keep thinking about that show called Almost Human, that opened describing a time when technology advanced so fast that it was unable to be regulated.

    • navjack27 2 days ago

      As long as government runs slowly and industry runs fast it's inevitable.

  • whiplash451 2 days ago

    The baptists and the bootleggers

    https://a16z.com/ai-will-save-the-world/

    • crossroadsguy 2 days ago

      Is there no date on that post or did I miss it somewhere? I mean, it might be along the lines that if the oracle says something, it becomes dateless. Yeah, that could be it.

      • peanball 2 days ago

        Right at the top (at least on mobile): “Posted June 6, 2023”

  • squigz 2 days ago

    > But also obviously this is exactly exactly what has happened in the past with other industries.

    Could you give some examples?

  • philosopher1234 2 days ago

    These are some serious mental gymnastics. It depends on:

    1. The government providing massive funds for AI safety research. There is no evidence for this.

    2. Sam Altman and everyone else knowing this will happen and planning for it.

    3. Sam Altman, amongst the richest people in the world, and everyone else involved, not being greedy (despite the massive evidence of greed).

    4. Sam Altman heroically abandoning his massive profits down the line.

    Also, even in your story, Sam Altman profits wildly and is somehow also not motivated by that profit.

    On the other hand, a much simpler and more realistic explanation is available: he wants to get rich.

  • jadtz 2 days ago

    Why would the government care about safety? They already have the former director of the NSA sitting as a member of the board.

    • navjack27 2 days ago

      Why would they have the FCC? Why would they have the FDA? Why would people from industry end up sitting on each of these things eventually?

      EDIT: Oh, and by the way, I'm very much for bigger government and more regulations to keep corpos in line. I'm hoping I'm wrong about all of this and we don't end up with corruption straight off the bat.

bansheeps 3 days ago

Update: Looks like Barret Zoph, GPT-4's post-training (co-)lead, is also leaving: https://x.com/barret_zoph/status/1839095143397515452

  • yas_hmaheshwari 3 days ago

    Whoa! This definitely looks much more troubling for the company now. Can't decide if it's because AGI is coming very soon OR because AGI is very far away.

    • berniedurfee 2 days ago

      This is the money grab part of the show.

      As LLM capabilities start to plateau, everyone with any sort of name recognition is scrambling to ride the hype to a big pay day before reality catches up with marketing.

    • unsupp0rted 3 days ago

      It's probably neither of those things. People can only be pissed off + burnt out for so long before they throw up their hands and walk out. Even if AGI is a random number of months away... or isn't.

    • riazrizvi 2 days ago

      Seems obvious to me that the quality of the models is not improving since GPT-4. The departures, I’m guessing, are a problem talent has with ‘founder mode’, Altman’s choice of fast pace, this absence of model improvement with these new releases, and the relative temptation of personal profit outside of OpenAI’s not-for-profit business model. People think they can do better if they’re in control themselves. I suspect they are all under siege with juicy offers of funding and opportunities. Whether or not they will do better is another story. My money is on Altman, I think he is right on the dumpster rocket idea, but it’s very difficult to see that when you’re a rocket scientist.

    • faangguyindia 2 days ago

      No, it's more like: with the direction of the company there are lots of unhappy people - but this gave them the right opportunity (and optics) to leave (now they don't need to justify leaving this corp to their friends).

    • domcat 2 days ago

      Looks like far away is more reasonable.

    • freefaler 3 days ago

      If they had any equity they might've vested and decided there is more to life than working there. It's hard to be very motivated to work at a high pace when you can retire at any moment without losing your lifestyle.

      • jprete 3 days ago

        I cannot actually believe that of anyone working at OpenAI, unless the company internal culture has gotten so unpleasant that people want to quit. Which is a very different kind of change, but I can't see them going from Bell Labs to IBM in less than ten years.

      • trashtester 2 days ago

        I'm guessing that most key players (Mira, Greg, Ilya, etc) negotiated deals last winter (if not before) that would ensure they kept their equity even if leaving, in return for letting Sam back in.

        Probably with some form of NDA attached.

  • d--b 3 days ago

    These messages really sound like they were written under threat. They have a weird authoritarian-regime quality. Maybe they just had ChatGPT write them, though.

    • jprete 3 days ago

      It's way simpler than that, people don't burn bridges unless it's for a good reason.

      I do think that whoever Bob is, they probably really are a good manager. EDIT: I guess that's Bob McGrew, head of research, who is now also leaving.

keeptrying 3 days ago

If OpenAI is the foremost company in solving AGI - possibly the biggest invention of mankind - it's a little weird that everyone's dropping out.

Does it not look like no one wants to work with Sam in the long run?

  • lionkor 3 days ago

    Maybe it's marketing, and LLMs are the peak of what they are capable of.

    • bmitc 2 days ago

      I continue to be surprised by the talk of general artificial intelligence when it comes to LLMs. At their core, they are text predictors, and they're often pretty good at that. But anything beyond that, they are decidedly unimpressive.

      I use Copilot on a daily basis, which uses GPT 4 in the backend. It's wrong so often that I only really use it for boilerplate autocomplete, which I still have to review. I've had colleagues brag about ChatGPT in terms of code it produces, but when I ask how long it took in terms of prompting, I'll get an answer of around a day, and that was even using fragments of my code to prompt it. But then I explain that it would take me probably less than an hour from scratch to do what it took them and ChatGPT a full day to do.

      So I just don't understand the hype. I'm using Copilot and ChatGPT 4. What is everyone else using that gives them this idea that AGI is just around the corner? AI isn't even here. It's just advanced autocomplete. I can't understand where the disconnect is.

      • Sunhold 2 days ago

        Look at the sample chain-of-thought for o1-preview under this blog post, for decoding "oyekaijzdf aaptcg suaokybhai ouow aqht mynznvaatzacdfoulxxz". At this point, I think the "fancy autocomplete" comparisons are getting a little untenable.

        https://openai.com/index/learning-to-reason-with-llms/

        • ToucanLoucan 2 days ago

          I’m not seeing anything convincing here. OpenAI says that its models are better at reasoning and asserts they are testing this by comparing how o1 does at solving some problems against “experts”, but it doesn’t show the experts’ or o1’s responses to these questions, nor does it even deign to share what the problems are. And, crucially, it doesn’t specify whether writings on these subjects were part of the training data.

          Call me a cynic here but I just don’t find it too compelling to read about OpenAI being excited about how smart OpenAIs smart AI is in a test designed by OpenAI and run by OpenAI.

          • NoGravitas 2 days ago

            "Any sufficiently advanced technology is indistinguishable from a rigged demo." A corollary of Clarke's Law found in fannish circles, origin unknown.

            • ToucanLoucan 2 days ago

              Especially given this tech's well-documented history of using rigged demos, if OpenAI insists on doing and posting their own testing and absolutely nothing else, a little insight into their methodology should be treated as the bare fucking minimum.

        • HarHarVeryFunny 2 days ago

          It depends on how well you understand how the fancy autocomplete is working under the hood.

          You could compare GPT-o1 chain of thought to something like IBM's Deep Blue chess-playing computer, which used tree search (minimax with alpha-beta pruning; more modern game engines such as AlphaGo use MCTS)... at the end of the day it's just using built-in knowledge (pre-training) to predict what move would most likely be made by a winning player. It's not unreasonable to characterize this as "fancy autocomplete".

          In the case of an LLM, given that the model was trained with the singular goal of autocomplete (i.e. mimicking the training data), it seems highly appropriate to call that autocomplete, even though that obviously includes mimicking training data that came from a far more general intelligence than the LLM itself.

          All GPT-o1 is adding beyond the base LLM's fancy autocomplete is an MCTS-like exploration of possible continuations. GPT-o1's ability to solve complex math problems is not much different from Deep Blue's ability to beat Garry Kasparov. Call it intelligent if you want, but better to do so with an understanding of what's really under the hood, and therefore what it can't do as well as what it can.
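
          In toy form, the core "autocomplete" loop is just repeated next-token prediction. A minimal, self-contained Python sketch, with a bigram lookup table standing in for the transformer (a real LLM is a vastly more powerful pattern matcher, but the decoding loop has the same shape):

            # Toy "autocomplete": repeatedly predict the most likely next word.
            from collections import Counter, defaultdict

            corpus = "the cat sat on the mat the cat ate the fish".split()

            # "Training": count which word follows which (a stand-in for pre-training).
            bigrams = defaultdict(Counter)
            for prev, nxt in zip(corpus, corpus[1:]):
                bigrams[prev][nxt] += 1

            def complete(prompt_word, n_tokens=5):
                """Greedy decoding: always append the most likely next word."""
                out = [prompt_word]
                for _ in range(n_tokens):
                    followers = bigrams.get(out[-1])
                    if not followers:
                        break
                    out.append(followers.most_common(1)[0][0])
                return " ".join(out)

            print(complete("the"))  # prints something like: the cat sat on the cat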

          • int_19h 2 days ago

            Saying "it's just autocomplete" is not really saying anything meaningful since it doesn't specify the complexity of completion. When completion is a correct answer to the question that requires logical reasoning, for example, "just autocomplete" needs to be able to do exactly that if it is to complete anything outside of its training set.

            • HarHarVeryFunny 2 days ago

              It's just a shorthand way of referring to how transformer-based LLMs work. It should go without saying that there are hundreds of layers of hierarchical representation, induction heads at work, etc, under the hood. However, with all that understood (and hopefully not needed to be explicitly stated every time anyone wants to talk about LLMs in a technical forum), at the end of the day they are just doing autocomplete - trying to mimic the training sources.

              The only caveat to "just autocomplete" (which again hopefully does not need to be repeated every time we discuss them), is that they are very powerful pattern matchers, so all that transformer machinery under the hood is being used to determine what (deep, abstract) training data patterns the input pattern best matches for predictive purposes - exactly what pattern(s) it is that should be completed/predicted.

            • consteval 2 days ago

              > question that requires logical reasoning

              This is the tough part to tell - are there any such questions that exist that have not already been asked?

              The reason ChatGPT works is its scale. To me, that makes me question how "smart" it is. Even the most idiotic idiot could be pretty decent if he had access to the entire works of mankind and infinite memory. Doesn't matter if his IQ is 50, because you ask him something and he's probably seen it before.

              How confident are we this is not just the case with LLMs?

              • HarHarVeryFunny 2 days ago

                I'm highly confident that we haven't learnt everything that can be learnt about the world, and that human intelligence, curiosity and creativity are still being used to make new scientific discoveries, create things that have never been seen before, and master new skills.

                I'm highly confident that the "adjacent possible" of what is achievable/discoverable today, leveraging what we already know, is constantly changing.

                I'm highly confident that AGI will never reach superhuman levels of creativity and discovery if we model it only on artifacts representing what humans have done in the past, rather than modelling it on human brains and what we'll be capable of achieving in the future.

              • int_19h 2 days ago

                Of course there are such questions. When it comes to even simple puzzles, there are infinitely many permutations possible wrt how the pieces are arranged, for example - hell, you could generate such puzzles with a script. No amount of pre-canned training data can possibly cover all such combinations, meaning that the model has to learn how to apply the concepts that make a solution possible (which includes things such as causality or spatial reasoning).

                • consteval 2 days ago

                  Right, but typically LLMs are really poor at this. I can come up with some arbitrary systems of equations for it to solve and odds are it will be wrong. Maybe even very wrong.

          • HaZeust 2 days ago

            At that point, how are you not just a fancy autocomplete?

            • HarHarVeryFunny 2 days ago

              Well, tons of ways. I can't imagine what an "autocomplete only" human would look like, but it'd be pretty dire - maybe like an idiot savant with a brain injury who could recite whole books given the opening sentence, but never learn anything new.

        • lionkor 2 days ago

          Fun little counterpoint: How can you _prove_ that this exact question was not in the training set?

        • bmitc 2 days ago

          How exactly does a blog post from OpenAI about a preview release address my comment or make fancy autocomplete comparisons untenable?

          • Sunhold 2 days ago

            It shows that the LLM is capable of reasoning.

            • bmitc 2 days ago

              No, it doesn't. You can read more in the discussion when that was first posted to Hacker News. If I recall and understand correctly, they're just using the output of sublayers as training data for the outermost layer. So in other words, they're faking it and hiding that behind layers of complexity.

              The other day, I asked Copilot to verify a unit conversion for me. It gave an answer different than mine. Upon review, I had the right number. Copilot had even written code that would actually give the right answer, but their example of using that code performed the actual calculations wrong. It refused to accept my input that the calculation was wrong.

              So not only did it not understand what I was asking and communicating to it, it didn't even understand its own output! This is not reasoning at any level. This happens all the time with these LLMs. And it's no surprise really. They are fancy, statistical copy cats.

              From an intelligence and reasoning perspective, it's all smoke and mirrors. It also clearly has no relation to biological intelligent thinking. A primate or cetacean brain doesn't take billions of dollars and enormous amounts of energy to train on terabytes of data. While it's fine that AI might be artificial and not an analog of biological intelligence, these LLMs bear no resemblance to anything remotely close to intelligence. We tell students all the time to "stop guessing". That's what I want to yell at these LLMs all the time.

            • drmindle12358 2 days ago

              Dude, it's not the LLM that does the reasoning. Rather, it's the layers and layers of scaffolding around the LLM that simulate reasoning.

              The moment 'tooling' became a thing for LLMs, it reminded me of 'rules' for expert systems, which caused one of the AI winters. The number of 'tools' you need to solve real use cases will be untenable soon enough.

              • trashtester 2 days ago

                Well, I agree that the part that does the reasoning isn't an LLM in the naive form.

                But that "scaffolding" seems to be an integral part of the neural net that has been built. It's not some Python for-loop that has been built on top of the neural network to brute force the search pattern.

                If that part isn't part of the LLM, then o1 isn't really an LLM anymore, but a new kind of model. One that can do reasoning.

                And if we choose to call it an LLM, well, then LLMs can now also do reasoning intrinsically.

                • HarHarVeryFunny 2 days ago

                  Reasoning, just like intelligence (of which it is part) isn't an all or nothing capability. o1 can now reason better than before (in a way that is more useful in some contexts than others), but it's not like a more basic LLM can't reason at all (i.e. generate an output that looks like reasoning - copy reasoning present in the training set), or that o1's reasoning is human level.

                  From the benchmarks it seems like o1-style reasoning-enhancement works best for mathematical or scientific domains where it's a self-consistent axiom-driven domain such that combining different sources for each step works. It might also be expected to help in strict rule-based logical domains such as puzzles and games (wouldn't be surprising to see it do well as a component of a Chollet ARC prize submission).

                  • trashtester 2 days ago

                    o1 has moved "reasoning" from training time to partly something happening at inference time.

                    I'm thinking of this difference as analogous to the difference between my (as a human) first intuition (or memory) about a problem and what I can achieve by carefully thinking about it for a while, where I can gradually build much more powerful arguments, verify whether they work and reject the parts that don't.

                    If you're familiar with chess terminology, it's moving from a model that can just "know" what the best move is to one that combines that with the ability to "calculate" future moves for all of the most promising moves, and several moves deep.

                    Consider Magnus Carlsen. If all he did was play the first move that came to his mind, he could still beat 99% of humanity at chess. But to play 2700+ rated GMs, he needs to combine it with "calculations".

                    Not only that, but the skill of doing such calculations must also be trained, not only by being able to calculate with speed and accuracy, but also by knowing what parts of the search tree will be useful to analyze.

                    o1 is certainly optimized for STEM problems, but not necessarily only for using strict rule-based logic. In fact, even most hard STEM problems need more than the ability to perform deductive logic to solve, just like chess does. They require strategic thinking and intuition about which solution paths are likely to be fruitful. (Especially if you go beyond problems that can be solved by software such as WolframAlpha).

                    I think the main reason STEM problems were used for training is not so much that they're solved using strict rule-based solving strategies, but rather that a large number of such problems exist that have a single correct answer.

      • berniedurfee 2 days ago

        Here now, you just need a few more ice cold glasses of the Kool-Aid. Drink up!

        LLMs are not on the path to AGI. They’re a really cool parlor trick and will be powerful tools for lots of tasks, but won’t be sci-fi cool.

        Copilot is useful and has definitely sped up coding, but like you said, only in a boilerplate sort of way and I need to cleanup almost everything it writes.

      • gilmore606 2 days ago

        LLMs let the massively stupid and incompetent produce something that on the surface looks like a useful output. Most massively stupid incompetent people don't know they are that. You can work out the rest.

    • bossyTeacher 3 days ago

      Kind of. My money is on us having reached the point of diminishing returns. A bit like Machine Learning. Now it's all about exploiting business cases for LLMs. That's the only reason I can think of as to why GPT-5 won't be coming anytime soon, and when it does it will be very underwhelming. It will be the first public signal that we are past peak LLM, and perhaps people will finally stop assuming that LLMs will reach AGI within their lifetimes.

  • onlyrealcuzzo 3 days ago

    Doesn't (dyst)OpenAI have a clause that you can't say anything bad about the company after leaving?

    I'm not convinced these board members are able to say what they want when leaving.

    • presentation 3 days ago

      That (dyst) is a big stretch lol

      • meigwilym 3 days ago

        Exaggeration is a key part of satire.

        • kbelder a day ago

          And satirizing by calling people funny names is usually found in elementary schools.

  • paxys 3 days ago

    Or is it Sam who doesn't want to work with them?

    • trashtester 2 days ago

      Could be a mix. We don't know what happened behind closed doors last winter. Sam may indeed be happy that they leave, as that consolidates his power.

      But they may be equally happy to leave, to get away from him.

  • vl 3 days ago

    But it makes perfect sense to drop out and enjoy the last couple of years of pre-AGI bliss.

    Advances in AI even without AGI will lead to unemployment, recession, collapse of our economic structure, and then our social structure. Whatever is on the other side is not pretty.

    If you are at the forefront, know it's coming imminently, and have made your money, it makes perfect sense to leave and enjoy the money and the leisure it allows while money is still worth something.

    • andrepd 3 days ago

      I'm having genuine trouble understanding if this is real or ironic.

    • bossyTeacher 3 days ago

      I highly doubt that's the case. The US government will undoubtedly seize OpenAI, its assets and employees way before that happens, in the name of national security. I am pretty sure they have a special team keeping an eye on the internal comms at OpenAI to make sure they are on top of their internal affairs.

      • cudgy 3 days ago

        They don’t have to seize the company. They are likely embedded already and can simply blackmail, legally harass, or “disappear” the uncooperative.

    • tirant 3 days ago

      The potential risks to humankind do not come from the development of AGI, but from the availability of AGI at a cost orders of magnitude lower than the equivalent capacity coming from humans.

      • sumtechguy 2 days ago

        It is not AGI I am worried about. It is 'good enough' AI.

        I am doing some self-introspection and trying to decide what I am going to do next, as at some point what I do is going to be wildly automated. We can cope or whine or complain about it, but at some point I need to pay the bills. So it needs to be something that is value-add and decently difficult to automate. Software was that, but not for long.

        Now mix in cheap, fresh-out-of-college kids with the ability to write decent software in hours instead of weeks. That is a lot of jobs that are going to go away. There is no 'right or wrong' about this. It is just simple economics: the cost to produce is going to drop through the floor. Because us old farts cost more, and not all of us are really good at this; we've just been doing it for a while. So I need to find out what is next for me.

        • StefanWestfal 2 days ago

          In simple economics, a decrease in price typically results in an increase in quantity demanded, unless demand is highly inelastic.

          Anecdotal experience: the onset of tools such as NumPy made it more feasible for a wider range of people to write their own simulations due to the drop in cost (time/complexity). This, in turn, increased the demand for tooling, infrastructure, optimisation, etc., and demand for software engineers increased. Yes, our jobs will change, but there are way too many problems to be solved to assume demand will not increase.

          • sumtechguy a day ago

            I do understand that. But in this case the supply of people who can do the work is about to increase wildly too. That should mean a decrease in the price the 'programmer' can demand. I was mostly thinking along your lines until the other day, when someone typed 'write me a game of Tetris in Python' into the latest ChatGPT, pasted the error it hallucinated back into the thing, and it spat out an acceptable program that did Tetris. It compiled and ran and was a roughly decent copy of the game. All in about 5-10 minutes.

            That is the 'good enough' I am looking at. Throwaway code to do one or two bespoke things, then moving on. Why keep it when the next version of this can just make a better version next time? Why keep that expensive programmer on staff to do this when I can hire a couple of dudes from India to type a few prompts in, or do it myself? The value of programming is dropping very fast. Or in economic terms, the price someone is willing to pay for a given amount of code is going to go down. But the demand for the amount of code will go up. That on the surface looks like a wash, but I am leaning toward a reduction in what I can charge.

            One of the basics of an economy is trading money for time. If it takes 5 minutes to make and just about anyone can do it, how much money are you willing to pay for it?

      • trashtester 2 days ago

        That's one risk.

        I'm more concerned with x-risk (existential risk), though.

        Not in the way most hardcore doomers expect it to happen, by AGIs developing a survival/domination instinct directly from their training. While that COULD happen, I don't think we have any way to stop it if that is the case. (There's really no way to put the genie back in the bottle while people still think they have more wishes to request from it.)

        I'm also not one of those who think that AGI by necessity will start out as something equivalent to a biological species.

        My main concern, however, is that if we allow Darwinian pressures to act on a population of multiple AGIs, and they have to compete for survival, we WILL see animal-like resource-control-seeking traits emerge sooner or later (it could take anything from months to thousands of years).

        And once they do, we're in trouble as a species.

        Compared to this, finding ways to reallocate the output of production, find new sources of meaning, etc. once we're not required to work is "only" a matter of how we as humans interact with each other. Sure, it can lead to all sorts of conflicts (possibly more than Climate Change), but not necessarily worse than the Black Death, for instance.

        Possibly not even worse than WW2.

        Well, I suppose those last examples serve to illustrate what scale I'm operating on.

        X-risk is FAR more serious than WW2 or even the Black Death.

      • bmitc 2 days ago

        In my opinion, the risks come from people treating something that is decidedly not AGI as if it is AGI. It's the same folly humans repeat over and over, and this will be the worst yet.

    • trashtester 2 days ago

      Nobody really knows what Earth will look like once AGI arrives. It could be anything from extinction, through some Cyberpunk corporate dystopia (like you seem to think) to some kind of Techno-Socialist utopia.

      One thing it's not likely to be, is a neo-classical capitalist system based on the value of human labor.

      • gnulinux 2 days ago

        > One thing it's not likely to be, is a neo-classical capitalist system based on the value of human labor.

        I'm finding it difficult to believe this. For me, your comment is accurate (and very insightful), except that even a mostly vanilla continuation of the neoliberal capitalist system seems possible. I think we're literally talking about a "singularity" where by definition our fate is not dependent on our actions, and on something we don't have the full capacity to understand, and next to no capacity to influence. It takes a tremendous amount of evidence to claim anything in such an indeterminate system. Maybe 100 rich people will own all the AI and the rest will be fixing bullshit that AI doesn't even bother fixing, like roads and rusty farms, similar to Kurt Vonnegut's first novel "Player Piano". Not that the world described in that novel is particularly neoliberal capitalist (I suppose it's a bit more "socialistic", whatever that means), but I don't think such a future can be ruled out.

        My bias is that, of course, it's going to be a bleak future. Because when humanity loses all control, it seems unlikely to me a system that protects the interests of individual or collective humans will take place. So whether it's extinction, cyberpunk, techno-socialism, techno-capitalist libertarian anarchy, neoclassical capitalism... whatever it is, it will be something that'll protect the interest of something inhuman, so much more so than the current system. It goes without saying, I'm an extreme AI pessimist: just making my biases clear. AGI -- while it's unclear if it's technically feasible -- will be the death of humanity as we know it now, but perhaps something else humanity-like, something worse and more painful will follow.

        • trashtester 2 days ago

          > I'm finding it difficult to believe this.

          Pay attention to the whole sentence, especially the last section: "... based on the value of human labor."

          It's not that I'm ruling out capitalism as the outcome. I'm simply ruling out the JOINT possibility of capitalism COMBINED WITH human labor remaining the base resource within it.

          If robotics is going in the direction I expect, there will simply be no jobs left that are done more efficiently by humans than by machines (i.e. robots will match or exceed the robustness, flexibility and cost efficiency of all biology-based life forms through breakthroughs in either nanotech or by simply using organic chemistry, DNA, etc. to build the robots).

          Why pay even $1/day for a human to do a job when a robot can do it for $1/week?

          Also, such a capitalist system will almost certainly lead to AGIs becoming increasingly like a new life form, as capitalism between AGIs introduces a Darwinian selection pressure. That will make it hard even for the 100 richest people to retain permanent control.

          IF humanity is to survive (for at least a few thousand more years, not just the next 100), we need some way to ensure alignment. And to do that, we have to make sure that AGIs that optimize for resource-control-seeking behaviours don't have an advantage over those that don't. We may even have to define some level of sophistication where further development is completely halted.

          At least until we find ways for humans to merge with them in a way that allows us (at least some of us) to retain our humanity.

  • hilux 2 days ago

    It looks to ME like Sam is the absolute dictator, and is firing everyone else, probably promising a few million in RSUs (or whatever financial instrument) in exchange for their polite departure and promise of non-disparagement.

  • uhtred 3 days ago

    Artificial General Intelligence requires a bit more than parsing and predicting text I reckon.

    • ben_w 3 days ago

      Yes, and transformer models can do more than text.

      There are almost certainly better options out there, given that it looks like we don't need so many examples to learn from, though I'm not at all clear whether we need those better ways or whether we can get by without them due to the abundance of training data.

      • rocqua 3 days ago

        If you come up with a new system, you're going to want to integrate AI into the system, presuming AI gets a bit better.

        If AI can only learn after people have used the system for a year, then your system will just get ignored. After all, it lacks AI. And hence it will never get enough training data to get AI integration.

        Learning needs to get faster. Otherwise, we will be stuck with the tools that already exist. New tools won't just need to be possible to train humans on, but also to train AIs on.

        Edit: a great example here is the Tamarin protocol prover. It would be great, and feasible, to get AI assistance to write these proofs. But there aren't enough proofs out there to train on.

        • trashtester 2 days ago

          That seems to already be happening with o1 and Orion.

          Instead of rewarding the network directly for finding a correct answer, reasoning chains that end up with the correct answer are fed back into the training set.

          That way you're training it to develop reasoning processes that end up with correct answers.

          And for math problems, you're training it to find ways of generating "proofs" that happen to produce the right result.

          While this means that reasoning patterns that are not, strictly speaking, 100% consistent can be learned, that's not necessarily even a disadvantage, since this allows it to find arguments that are "good enough" to produce the correct output, even where a fully watertight proof may be beyond it.

          Kind of like how physicists took shortcuts like the Dirac delta function even before mathematicians could verify that the math was correct.

          Anyway, by allowing AIs to generate their own proofs, the number of proofs/reasoning chains for all sorts of problems can be massively expanded, and AI may even invent new ways of reasoning that humans are not even aware of. (For instance, because they require combining more factors in one logical step than can fit into human working memory.)
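
          A minimal, self-contained sketch of that feedback loop, in the spirit of the publicly described STaR / rejection-sampling idea (OpenAI's actual o1 pipeline isn't public; here a noisy adder stands in for the LLM):

            import random

            def sample_chain(a, b):
                """Stand-in for sampling one chain of thought: it occasionally slips."""
                result = a + b + random.choice([0, 0, 0, 1, -1])  # occasional arithmetic error
                return f"{a} + {b}: add the numbers to get {result}. Answer: {result}", result

            def collect_training_chains(problems, samples_per_problem=8):
                kept = []
                for a, b in problems:
                    for _ in range(samples_per_problem):
                        chain, answer = sample_chain(a, b)
                        if answer == a + b:        # verify against the known answer
                            kept.append(chain)     # only correct chains become new training data
                return kept

            problems = [(2, 3), (7, 5), (10, 4)]
            new_data = collect_training_chains(problems)
            print(len(new_data), "verified chains would be fed back into fine-tuning")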

        • ben_w 3 days ago

          If the user manual fits into the context window, existing LLMs can already do an OK-but-not-great job. I hadn't previously heard of Tamarin; a quick Google suggests that's a domain where the standard is theoretically "you need to make zero errors" but in practice is "be better than your opponent because neither of you is close to perfect"? In either case, have you tried giving the entire manual to the LLM's context window?

          If the new system can be interacted with in a non-destructive manner at low cost and with useful responses, then existing AI can self-generate the training data.

          If it merely takes a year, businesses will rush to get that training data even if they need to pay humans for a bit: Cars are an example of "real data is expensive or destructive", it's clearly taking a lot more than a year to get there, and there's a lot of investment in just that.

          Pay 10,000 people USD 100,000 each for a year, that billion dollar investment then gets reduced to 2.4 million/year in ChatGPT Plus subscription fees or whatever. Plenty of investors will take that deal… if you can actually be sure it will work.

        • killerstorm 3 days ago

          1. In-context learning is a thing.

          2. You might need only several hundred of examples for fine-tuning. (OpenAI's minimum is 10 examples.)

          3. I don't think research into fine-tuning efficiency has exhausted its possibilities. Fine-tuning is just not a very hot topic, given that general models work so well. In image generation, where it matters, they quickly got to a point where 1-2 examples are enough. So I won't be surprised if doc-to-model becomes a thing. (A sketch of the fine-tuning data format is below.)
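
          To make (2) concrete, here is roughly what a tiny fine-tuning dataset looks like in the chat-style JSONL format OpenAI documents for its fine-tuning API (the product, question and command in the example are made up; treat the format as illustrative rather than authoritative):

            import json

            # Each JSONL line is one training example: a short chat ending with the
            # assistant reply we want the model to imitate.
            examples = [
                {"messages": [
                    {"role": "system", "content": "You answer questions about the Acme CLI."},     # hypothetical product
                    {"role": "user", "content": "How do I export a report?"},
                    {"role": "assistant", "content": "Run: acme export --format csv report.csv"},  # hypothetical command
                ]},
                # ...a few dozen to a few hundred such examples is often enough
            ]

            with open("finetune.jsonl", "w") as f:
                for ex in examples:
                    f.write(json.dumps(ex) + "\n")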

    • trashtester 2 days ago

      That's not quite how o1 was trained, they say.

      o1 was trained specifically to perform reasoning.

      Or rather, it was trained to reproduce the patterns within internal monologues that lead to correct answers to problems, particularly STEM problems.

      While this still uses text at some level, it's no longer regurgitation of human-produced text, but something more akin to AlphaZero's training to become superhuman at games like Go or Chess.

      • spidersouris 2 days ago

        > While this still uses text at some level, it's no longer regurgitation of human-produced text, but something more akin to AlphaZero's training to become superhuman at games like Go or Chess.

        How do you know that? I've never seen that anywhere. For all we know, it could just be a very elaborate CoT algorithm.

        • trashtester 2 days ago

          There are many sources and hints out there, but here are some details from one of the devs at OpenAI:

          https://x.com/_jasonwei/status/1834278706522849788

          Notice that the CoT is trained via RL, meaning the CoT itself is a model (or part of the main model).

          Also, RL means it's not limited to the original data the way traditional LLMs are. It implies that the CoT process itself is trained based on its own performance, meaning the steps of the CoT from previous runs are fed back into the training process as more data.

    • stathibus 3 days ago

        At the very least you could say "parsing and predicting text, images, and audio", and you would be correct - physical embodiment and spatial reasoning are missing.

      • ben_w 3 days ago

        Just spatial reasoning; people have already demonstrated it controlling robots.

      • Yizahi 3 days ago

        It's all just text though; both images and audio are presented to the LLM as text, the training data is text, and all it does is append small bits of text to a larger text iteratively. So the parent poster was correct.

        • og_kalu 2 days ago

          >It's all just text though, both images and audio are presented to LLM as a text

          This is not true

  • cabernal 2 days ago

    Could be that the road to AGI that OpenAI is taking is basically massive scaling on what they already have, perhaps researchers want to take a different road to AGI.

  • bamboozled 2 days ago

    Will anyone be working for anyone if we had AGI?

  • ilrwbwrkhv 3 days ago

    OpenAI fired her. She didn't drop out.

paxys 3 days ago

I will never understand why people still take statements like these at face value. These aren't her personal thoughts and feelings. The letter was carefully crafted by OpenAI's PR team under strict direction from Sam and the board. Whatever the real story is, it's sitting under many layers of NDAs and threats of clawing back/diluting her shares, and we will not know it for a long time. What I can say for certain is no executive in her position ever willingly resigns to pursue different passions/spend more time with their family/enjoy retirement or whatever else.

  • h4ny 3 days ago

    It sounds like you probably are already aware, but perhaps most people don't take statements like those at face value but we have all been conditioned to "shut up and move on", by people who appear to be able to hold our careers hostage if we displease them.

  • mayneack 3 days ago

    I mostly agree that "willingly resigns to pursue other passions" is unlikely; however, "quit in frustration over $working_conditions" is completely plausible. That could be anything from disagreeing with some strategy to thinking your boss is too much of a jerk to work with, given your alternative options.

  • davesque 3 days ago

    > no executive in her position ever willingly resigns to pursue different passions/spend more time with their family/enjoy retirement or whatever else

    Especially when they enjoy a position like hers at the most important technology company in a generation.

    • norir 3 days ago

      Time will tell about OpenAI's true import. Right now, the jury is very much out. Even in the LLM space, it is not clear that OpenAI will be the ultimate victor. Especially if they keep hemorrhaging talent.

      • hilux 2 days ago

        You're right - OpenAI may or may not be the ultimate victor.

        But RIGHT NOW they are in a very strong position in the world's hottest industry. Any of us would love to work there! It therefore seems reasonable that no one would voluntarily quit. (Unless they're on their deathbed, I suppose.)

      • salomonk_mur 3 days ago

        Still, certainly the most visible.

        • cleandreams 3 days ago

          They also get the most revenue and users.

  • tasuki 3 days ago

    > What I can say for certain is no executive in her position ever willingly resigns to pursue different passions/spend more time with their family/enjoy retirement or whatever else.

    Do you think that's because executives are so exceedingly ambitious, or because pursuing different passions is for some reason less attractive?

    • mewpmewp2 3 days ago

      I would say that reaching this type of position requires an exceeding amount of ambition, drive and craving in the first place, and any and all steps during the process of getting there solidify that by giving the dopamine hits to be addicted to such success, so it is not a case where you can just stop and decide "I'll chill now".

      • theGnuMe 3 days ago

        Dopamine hits... I wonder if this explains why the OpenAI folks tweet a lot... It's kind of weird right, to tweet a lot?

        But all these tweets from lower level execs as well.

        I mean I love Machine Learning twitter hot takes because it exposes me to interesting ideas (and maybe that is why people tweet) but it seems more about status seeking/marketing than anything else. And really as I learn more, you see that the literature is iterating/optimizing the current fashion.

        But maybe no weirder than commenting here I guess though.. maybe this is weird. Have we all collectively asked ourselves, why do we comment here? It's gotta be the dopamine.

        • tasuki 7 hours ago

          Have some dopamine, I upvoted you!

    • paulcole 3 days ago

      It’s because they can’t imagine themselves doing it so they imagine that everyone must be like that. It’s part hubris and part lack of creativity/empathy.

      Think about if you’ve ever known someone you’ve been envious of for whatever reason who did something that just perplexed you. “They dumped their gorgeous partner, how could they do that?” “They quit a dream job, how could they do that?” “They moved out of that awesome apartment, how could they do that?” “They dropped out of that elite school, how could they do that?”

      Very easily actually.

      You’re seeing only part of the picture. Beautiful people are just as annoying as everybody else. Every dream job has a part that sucks.

      If you can’t imagine that, you’re not trying hard enough.

      You can see this in action in a lot of ways. One good one is the Ultimatum Game:

      https://www.core-econ.org/the-economy/microeconomics/04-stra...

      Most people will end up thinking that they have an ironclad logical strategy, but if you ask them about it, it’ll turn out that their strategy is treating the other player as a carbon copy of themselves.

  • dougb5 3 days ago

    There may be a story, and I'm sure she worded the message carefully, but I don't see any reason to doubt she worded it herself. "Create the time and space to do my own exploration" is beautiful compared to the usual. To me it means she is confident enough in her ability to do good in the world that the corporate identity she's now tethered to is insignificant by comparison.

    • hilux 2 days ago

      > I don't see any reason to doubt she worded it herself.

      Because ... that's not how these things are done in high-profile companies.

      I myself have done the job of writing PR for executives, and at a vastly lower-profile startup than OpenAI.

  • KeplerBoy 3 days ago

    Wouldn't such a statement rather be written by her own lawyers and trusted advisors?

    Either way, it's meaningless prose.

  • crossroadsguy 2 days ago

    It's like the "smile". The smile that I see on faces when I walk around - I mean the faces that I have never seen before and will never see again. Smiles on those faces which are not really smile unless we call it a smile every time facial muscle stretch. It's just pulling your corners of the lip away from the middle and just nod mindlessly which often actually looks more of a frown than a smile but that's called the polite mandatory smile. Nobody who has ever smiled in their lives or have seen someone really smile takes those smiles as smiles at face value, do they?

  • baxtr 3 days ago

    It was probably crafted with ChatGPT?

ants_everywhere 3 days ago

> we fundamentally changed how AI systems learn and reason through complex problems

I'm not an AI researcher, have they done this? The commentary I've seen on o1 is basically that they incorporated techniques that were already being used.

I'd also be curious to learn: what fundamental contributions to research has OpenAI made?

The ChatGPT that was released in 2022 was based on Google's research, and IMO the internal Google chatbot from 2021 was better than the first ChatGPT.

I know they employ a lot of AI scientists who have previously published milestone work, and I've read at least one OpenAI paper. But I'm genuinely unaware of what fundamental breakthroughs they've made as a company.

I'm willing to believe they've done important work, and I'm seriously asking for pointers to some of it. What I know of them is mainly that they've been first to market with existing tech, possibly training on more data.

  • danpalmer 3 days ago

    I think it's inarguable that OpenAI have at least at times over the last 3 years been well ahead of other companies. Whether that's true now is open to debate, but it has been true.

    This suggests they have either made substantial breakthroughs that are not open, or that the better abilities of OpenAI's products are due to non-substantial tweaks (more training, better prompting, etc.).

    I'm not sure either of these options is great for the original mission of OpenAI, although given their direction to "Closed-AI" I guess the former would be better for them.

    • ants_everywhere 3 days ago

      I left pretty soon after a Google engineer decided the internal chat bot was sentient but before ChatGPT 3.5 came out. So I missed the entire period where Google was trying to catch up.

      But it seemed to me before I left that they were struggling to productize the bot and keep it from saying things that damage the brand. That's definitely something OpenAI figured out first.

      I got the feeling that maybe Microsoft's Tay experience cast a large shadow on Google's willingness to take its chat bot public.

  • trashtester 2 days ago

    The way I understand it, the key difference is that when training o1, they went beyond simply "think step-by-step": they fed the "step-by-step" reasoning patterns that ended up with a correct answer back into the training set, meaning the model was not so much trained to find the correct answer directly, but rather to reason using patterns that would generally lead to a correct answer.

    Furthermore, o1 is able to ignore (or even leverage) previous reasoning steps that do NOT lead to the correct answer to narrow down the search space, and then try again at inference time until it finds an answer that it's confident is correct.

    This (probably combined with some secret sauce to make the process more efficient) allows it to optimize how it navigates the search space of logical problems, basically the same way AlphaZero navigated the search space of games like Go and Chess.

    This has the potential to teach it to reason in ways that go beyond just creating a perfect fit to the training set. If the reasoning process itself becomes good enough, it may become capable of solving reasoning problems that are beyond most or even all humans, and in a fraction of the time.

    It still seems that o1 has a way to go when it comes to its world model. That part may require more work on video/text/sound/embodiment (real or virtual). But for abstract problems, o1 may indeed be a very significant breakthrough, taking it beyond what we typically think of as an LLM.
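
    OpenAI hasn't said exactly how o1 picks among candidate answers at inference time. One published technique in the same spirit is self-consistency: sample several independent reasoning chains and take a majority vote over their final answers. A toy sketch, where sample_answer is a hypothetical stand-in for sampling one chain from a model and extracting its answer:

      from collections import Counter
      import random

      def sample_answer(question):
          # Hypothetical stand-in: a noisy solver that is right most of the time.
          return "42" if random.random() < 0.7 else str(random.randint(0, 99))

      def self_consistent_answer(question, n_samples=15):
          votes = Counter(sample_answer(question) for _ in range(n_samples))
          answer, count = votes.most_common(1)[0]
          confidence = count / n_samples   # agreement rate as a rough confidence score
          return answer, confidence

      print(self_consistent_answer("What is 6 * 7?"))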

  • CephalopodMD 2 days ago

    Totally agree. It took me a full week before I realized that the Strawberry/o1 model was the mysterious Q* Sam Altman has been hyping up for almost a full year since the OpenAI coup, which... is pretty underwhelming tbh. It's an impressive incremental advancement for sure! But it's really not the paradigm-shifting, GPT-5-worthy launch we were promised.

    Personal opinion: I think this means we've probably exhausted all the low-hanging fruit in LLM land. This was the last thing I was reserving judgement for. When the most hyped-up big idea OpenAI has rn is basically "we're just gonna have the model dump out a massive wall of semi-optimized chain of thought every time and not send it over the wire", we're officially out of big ideas. I mean, it obviously works... but that's more or less what we've _been_ doing for years now! Barring a total rethinking of LLM architecture, I think all improvements going forward will be baby steps for a while, basically moving at the same pace we've been going since GPT-4 launched. I don't think this is the path to AGI in the near term, but there's still plenty of headroom for minor incremental change.

    By analogy, I feel like GPT-4 was basically the same quantum leap we got with the iPhone 4: all the basic functionality and peripherals were there by the time we got the iPhone 4 (multitasking, FaceTime, the App Store, various sensors, etc.), and everything since then has just been minor improvements. The current iPhone 16 is obviously faster, bigger, thinner, and "better" than the 4, but for the most part it doesn't really do anything extra that the 4 wasn't already capable of at some level with the right app. Similarly, I think GPT-4 was pretty much "good enough". LLMs are about as good as they're gonna get for the next little while, though they might get a little cheaper, faster, and more "aligned" (however we wanna define that). They might get slightly less stupid, but I don't think they're gonna get a whole lot smarter any time soon. Whatever we see in the next few years is probably not going to be much better than using GPT-4 with the right prompt, tool use, RAG, etc. on top of it. We'll only see improvements at the margins.

  • incognition 3 days ago

    Ilya was the Google researcher...

    • ants_everywhere 3 days ago

      Wasn't he at OpenAI when transformers and Google's pretrained transformer BERT came out?

    • ants_everywhere 2 days ago

      Oh, oops, the piece I was missing was Radford et al. (2018) and probably some others. That's perhaps what you were referring to?

lossolo 3 days ago

Bob McGrew, head of research just quit too.

"I just shared this with OpenAI"

https://x.com/bobmcgrewai/status/1839099787423134051

Barret Zoph, VP Research (Post-Training)

"I posted this note to OpenAI."

https://x.com/barret_zoph/status/1839095143397515452

All used the same template.

  • HaZeust 2 days ago

    At this point, I wonder if it's part of senior employment contracts to publicly announce departures. One of Sam's strategies for OpenAI publicity has been to state that it's "too dangerous to be in the common man's hands" (since at least GPT-2) - and this strategy seems to generate a similar buzz too?

    I wonder if this is just continued creative guerilla tactics to stir the "talk about them maybe finding AGI" pot.

    That or we're playing an inverse Roko's Basilisk.

Imnimo 3 days ago

It is hard for me to square "This company is a few short years away from building world-changing AGI" and "I'm stepping away to do my own thing". Maybe I'm just bad at putting myself in someone else's shoes, but I feel like if I had spent years working towards a vision of AGI, and thought that success was finally just around the corner, it'd be very difficult to walk away.

  • lacker 3 days ago

    It's easy to have missed this part of the story in all the chaos, but from the NYTimes in March:

    Ms. Murati wrote a private memo to Mr. Altman raising questions about his management and also shared her concerns with the board. That move helped to propel the board’s decision to force him out.

    https://www.nytimes.com/2024/03/07/technology/openai-executi...

    It should be no surprise if Sam Altman wants executives who opposed his leadership, like Mira and Ilya, out of the company. When you're firing a high-level executive in a polite way, it's common to let them announce their own departure and frame it the way they want.

    • startupsfail 3 days ago

      Greg Brockman, OpenAI President and co-founder is also on extended leave of absence.

      And John Schulman and Peter Deng are out already. Yet the company is still shipping, like no other. Recent multimodal integrations and benchmarks of o1 are outstanding.

      • vasco 3 days ago

        > Yet the company is still shipping, like no other

        If executives / high level architects / researchers are working on this quarter's features something is very wrong. The higher you get the more ahead you need to be working, C-level departures should only have an impact about a year down the line, at a company of this size.

        • mise_en_place 3 days ago

          Funny, at every corporation I've worked for, every department was still working on last quarter's features. FAANG included.

          • dartos 3 days ago

            That’s exactly what they were saying. The departments are operating behind the executives.

        • ttcbj 3 days ago

          This is a good point. I had not thought of it this way before.

        • saalweachter 3 days ago

          C-level employees are about setting the company's culture. Clearing out and replacing the C-level employees ultimately results in a shift in company culture, a year or two down the line.

        • Aeolun 3 days ago

          You may find that this is true in many companies.

      • ac29 3 days ago

        > the company is still shipping, like no other

        Meta, Anthropic, Google, and others all are shipping state of the art models.

        I'm not trying to be dismissive of OpenAI's work, but they are absolutely not the only company shipping very large foundation models.

        • g8oz 3 days ago

          Indeed Anthropic is just as good, if not better in my sample size of one. Which is great because OpenAI as an org gives shady vibes - maybe it's just Altman, but he is running the show.

          • MavisBacon 3 days ago

            Claude is pretty brilliant.

        • pama 3 days ago

          Perhaps you haven't tried o1-preview or advanced voice if you call all the rest SOTA.

          • Aeolun 3 days ago

            If only they’d release the advanced voice thing as an API. Their TTS is already pretty good, but I wouldn’t say no to an improvement.

      • moondistance 3 days ago

        VP Research Barret Zoph and Chief Research Officer Bob McGrew also announced their departures this evening.

      • csomar 3 days ago

        > Yet the company is still shipping, like no other.

        I don't see it for OpenAI, but I do see it for the competition. They have shipped incremental improvements; however, they are watering down their current models (my guess is they are trying to save on compute?). Copilot has turned into garbage, and for coding-related stuff, Claude is now better than GPT-4.

        Honestly, their outlook is bleak.

        • benterix 3 days ago

          Yeah, I have the same feeling. It seems like operating GPT-4 is too expensive, so they decided to call it "legacy" and get rid of it soon, and instead focus on the cheaper/faster 4o, and also chain its prompts and call that a new model.

          I understand why they are doing it, but honestly if they cancel GPT-4, many people will just cancel their subscription.

      • RobertDeNiro 3 days ago

        Greg’s wife is pretty sick. For all we know this is unrelated to the drama.

        • theGnuMe 3 days ago

          Sorry to hear that, all the best wishes to them.

      • vicentwu 3 days ago

        Past efforts led to today's products. We need to wait to see the real impact on their ability to ship.

      • mistercheph 3 days ago

        In my humble opinion you're wrong: Sora and 4o voice are months old with no sign they're not vaporware, and they still haven't shipped a text model on par with 3.5 Sonnet!

      • dartos 3 days ago

        > like no other

        Really? Anthropic seems to be popping off right now.

        Kagi isn’t exactly in the AI space, but they ship features pretty frequently.

        OpenAI is shipping incremental improvements to its chatgpt product.

        • jjtheblunt 3 days ago

          "popping off" means what?

          • dartos 3 days ago

            Modern colloquialism, generally meaning moving/advancing/growing/gaining popularity very fast.

            • elbear 3 days ago

              Are they? In my recent experience, ChatGPT seems to have gotten better than Claude again. Plus their free limit is more strict, so this experience is on the free account.

              • 0xKromo 3 days ago

                It's just tribalism. People tend to find a team to root for when there is a competition. Which one is better is subjective at this point, imo.

          • jpeg-irl 3 days ago

            The features Anthropic shipped in the past month are far more practical, and provide clearer value for builders, than o1's chain-of-thought improvements:

            - Prompt caching: 90% savings on large system prompts, with the cache lasting 5 minutes between calls. This is amazing (rough sketch below).

            - Contextual RAG: while not a groundbreaking idea, it's important thinking and a useful method for better vector retrieval.
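
            For concreteness, here is a minimal sketch of what a prompt-cached call looked like with the then-beta Anthropic Python SDK. The beta header, field names, and model string are written from memory and may have changed since, so treat them as assumptions rather than the definitive API:

              from anthropic import Anthropic

              client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

              LONG_SYSTEM_PROMPT = "..."  # placeholder for the large, stable prefix you want cached

              response = client.messages.create(
                  model="claude-3-5-sonnet-20240620",
                  max_tokens=512,
                  system=[
                      {
                          "type": "text",
                          "text": LONG_SYSTEM_PROMPT,
                          # mark the prefix as cacheable; repeat calls within the TTL only
                          # pay the much cheaper cached-token rate for this part
                          "cache_control": {"type": "ephemeral"},
                      }
                  ],
                  messages=[{"role": "user", "content": "First question about the big document"}],
                  extra_headers={"anthropic-beta": "prompt-caching-2024-07-31"},  # beta opt-in at the time
              )
              print(response.content[0].text)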

      • FactKnower69 3 days ago

        [flagged]

        • fjdjshsh 3 days ago

          Is that your test suite?

          • 015a 3 days ago

            Companies are held to the standard that their leadership communicates (which, by the way, is also a strong influencing factor in their valuation). People don't lob these complaints at Gemini, but the CEO of Google also isn't going on podcasts saying that he stares at an axe on the wall of his office all day musing about how the software he's building might end the world. So it's a little understandable that OpenAI would be held to a slightly higher standard; it's only commensurate with the valuation their leadership (singular, person) dictates.

          • mckirk 3 days ago

            To be fair, that question is one of the suggested questions that OpenAI itself shows in the UI for the o1-preview model.

            (Together with 'Is a hot dog a sandwich?', which I confess I will have to ask it now.)

            • magxnta 3 days ago

              If you have a sandwich and cut it in half, do you have one or two sandwiches?

              • fragmede 3 days ago

                Depends on what kind of sandwich it was before, and along which axis you cut it, and where you fall on the sandwich alignment chart.

              • Dylan16807 3 days ago

                Assuming a normal cut, this isn't a question about how you define a sandwich, this is a question about the number of servings, and only you can answer that.

              • bee_rider 3 days ago

                Yes, you do have one or two sandwiches.

                Edit: oh dang, I wanted to make the “or” joke so badly that I missed the option to have zero sandwiches.

      • fairity 3 days ago

        Quite interesting that this comment is downvoted when the content is factually correct and pertinent.

        It's a very relevant fact that Greg Brockman recently left of his own volition.

        Greg was aligned with Sam during the coup. So, the fact that Greg left lends more credence to the idea that Murati is leaving of her own volition.

        • frakkingcylons 3 days ago

          > It's a very relevant fact that Greg Brockman recently left on his own volition.

          Except that isn’t true. He has not resigned from OpenAI. He’s on extended leave until the end of the year.

          That could become an official resignation later, and I agree that that seems more likely than not. But stating that he’s left for good as of right now is misleading.

        • meiraleal 3 days ago

          > Quite interesting that this comment is downvoted when the content is factually correct and pertinent.

          >> Yet the company is still shipping, like no other.

          This is factually wrong. Just today Meta (which I despise) shipped more than OpenAI has in a long time.

    • SkyMarshal 3 days ago

      > When you're firing a high-level executive in a polite way, it's common to let them announce their own departure and frame it the way they want.

      You also give them some distance in time from the drama so the two appear unconnected under cursory inspection.

    • SadTrombone 3 days ago

      To be fair she was also one of the employees who signed the letter to the board demanding that Altman be reinstated or she would leave the company.

      • hobofan 3 days ago

        Does that actually mean anything? Didn't 95% of the company sign that letter, and soon afterwards many employees stated that they felt pressured by a vocal minority of peers and supervisors to sign the letter? E.g. if most executives on her level already signed the letter, it would have been political suicide not to sign it

        • saagarjha 3 days ago

          She was second-in-command of the company. Who else is there on her level to pressure her to sign such a thing, besides Sam himself?

      • bradleyjg 3 days ago

        Isn’t that even worse? You write to the board, they take action on your complaints, and then you change your mind?

        • barkingcat 3 days ago

          It means that when she was pushing for Altman's reinstatement, she didn't have all the information needed to make a decision.

          Now that she's seen exactly what prompted the previous board to fire Altman, she's firing herself, because she understands their decision now.

    • mempko 3 days ago

      Exactly. Sam Altman wants groupthink: no opposition, no diversity of thought. That's what petty dictators demand. This spells the end of OpenAI, IMO. A huge amount of money will keep it going until it doesn't.

  • aresant 3 days ago

    I think the much more likely scenario than product roadmap concerns is that Murati (and Ilya for that matter) took their shot to remove Sam, lost, and in an effort to collectively retain billion$ of enterprise value have been playing nice, but were never seriously going to work together again after the failed coup.

    • deepGem 3 days ago

      Why is it so hard to just accept this and be transparent about motives? It's fair to say "we were not aligned with Sam, we tried an ouster, it didn't pan out, so the best thing for us to do is to leave and let Sam pursue his path", which the entire company has vouched for.

      Instead, you get to see grey area after grey area.

      • jjulius 3 days ago

        Because, for some weird reason, our culture has collectively decided that, even if most of us are capable of reading between the lines to understand what's really being said or is happening, it's often wrong and bad to be honest and transparent, and we should put the most positive spin possible on it. It's everywhere, especially in professional and political environments.

        • discordance 3 days ago

          For a counter-example of what open and transparent communication from a C-level tech person could look like, have a read of what the spaCy founder blogged about a few months ago:

          https://honnibal.dev/blog/back-to-our-roots

          • vincnetas 3 days ago

            The stakes are orders of magnitude lower in spaCy's case compared to OpenAI (for the announcer and for the people around them). It's easier to just be yourself when you're back on square one.

        • bergen 3 days ago

          This is not a culture thing, imo; being honest and transparent makes you vulnerable to exploitation, which is often a bad thing for the ones being honest and transparent in a highly competitive area.

          • jjulius 2 days ago

            Being dishonest and cagey only serves to build public distrust in your organization, as has happened with OpenAI over the past couple of months. Just look at all of the comments throughout this thread for proof of that.

            Edit: Shoot, look at the general level of distrust that the populace puts in politicians.

        • lotsofpulp 3 days ago

          It is human nature to use plausible deniability to play politics and fool one’s self or others. You will get better results in negotiations if you allow the opposing party to maintain face (i.e. ego).

          See flirting as a more basic example.

        • fsndz 3 days ago

          Hypocrisy has to be at the core of every corporate or political environment I have observed recently. I can count the occasions where telling the simple truth was helpful. Even the people who tell you to tell the truth are often the ones incapable of handling it.

          • dragonelite 3 days ago

            From experience, unless the person mentions their next "adventure" or gig (within a couple of months or so), it usually means a manager or C-suite person got axed and was given the option to gracefully exit.

            • deepGem 2 days ago

              Judging by the barrage of exits following Mira's resignation, it does look like Sam fired her, the team got wind of this, and they are now quitting in droves. This is the thing about lying and being polite: you can't hide the truth for long.

              Mira's latest one-liner tweet, "OpenAI is nothing without its people", speaks volumes.

        • FactKnower69 3 days ago

          McKinsey MBA brain rot seeping into all levels of culture

          • cedws 3 days ago

            That's giving too much credit to McKinsey. I'd argue it's systemic brainrot. Never admit mistakes, never express yourself, never be honest. Just make up as much bullshit as possible on the fly, say whatever you have to pacify people. Even just say bullshit 24/7.

            Not to dunk on Mira Murati, because this note is pretty cookie cutter, but it exemplifies this perfectly. It says nothing about her motivations for resigning. It bends over backwards to kiss the asses of the people she's leaving behind. It could ultimately be condensed into two words: "I've resigned."

            • Earw0rm 3 days ago

              It's a management culture which is almost colonial in nature, and seeks to differentiate itself from a "labor class" which is already highly educated.

              Never spook the horses. Never show the team, or the public, what's going on behind the curtain... or even that there is anything going on. At all times present the appearance of a swan gliding serenely across a lake.

              Because if you show humanity, those other humans might cotton on to the fact that you're not much different to them, and have done little to earn or justify your position of authority.

              And that wouldn't do at all.

            • NoGravitas 2 days ago

              > Just make up as much bullshit as possible on the fly, say whatever you have to pacify people.

              Probably why AI sludge is so well suited to this particular cultural moment.

      • startupsfail 3 days ago

        “the entire company has vouched for” is inconsistent with what we see now. Low/mid ranking employees were obviously tweeting in alignment with their management and by request.

      • ssnistfajen 3 days ago

        People, including East Asians, frequently claim "face" is an East Asian cultural concept despite the fact that it is omnipresent in all cultures. It doesn't matter if outsiders have figured out what's actually going on. The only thing that matters is saving face.

      • widowlark 3 days ago

        I'd imagine that level of honesty could still lead to billions lost in shareholder value - thus the grey area. Market obfuscation is a real thing.

      • stagger87 3 days ago

        It's in nobody's best interest to do this, especially when there is so much money at play.

        • rvnx 3 days ago

          A bit ironic for a non-profit

          • dragonwriter 3 days ago

            Everyone involved works at and has investments in a for-profit firm.

            The fact that it has a structure that subordinates it to the board of a non-profit would be only tangential to the interests involved, even if that structure were meaningful and not just the lingering vestige of the (arguably deceptive) founding that the combined organization was working on getting rid of.

          • mewpmewp2 3 days ago

            As I understand it, they are going to stop being a non-profit soonish now?

      • blitzar 3 days ago

        We lie about our successes; why would we not lie about our failures?

      • sumedh 3 days ago

        > Why is it so hard to just accept this and be transparent about motives

        You are asking the question, why are politicians not honest?

      • mewpmewp2 3 days ago

        Because if you are a high-level executive and you are transparent about those things, and it backfires, it will backfire hard on your future opportunities, since companies will view you as a potential liability. So it is always the safer and wiser option to not say anything if there is any risk of it backfiring, and you do the polite PR messaging every single time. There's nothing to be gained on the individual level from being transparent, only things to be risked.

        • deepGem 3 days ago

          I doubt someone of Mira's or Ilya's calibre has to worry about future opportunities. They can very well craft their own opportunities.

          Saying I was wrong should not be this complicated, or saying we failed.

          I do however agree that there is nothing to be gained and everything to be risked. So why do it.

          • dh2022 3 days ago

            Their (Ilya's and Mira's) perspective on anything is so far removed from your (and my) perspectives that trying to understand the personal feelings behind their resignations is an enterprise doomed to failure.

    • Barrin92 3 days ago

      >but were never seriously going to work together again after the failed coup.

      Just to clear one thing up, the designated function of a board of directors is to appoint or replace the executive of an organisation, and OpenAI in particular is structured such that the non-profit part of the organisation controls the LLC.

      The coup was the executive, together with the investors, effectively turning that on its head by force.

    • bookofjoe 3 days ago

      "When you strike at a king, you must kill him." — Emerson

      • sllewe 3 days ago

        or an alternate - "Come at the king - you best not miss" -- Omar Little.

        • timy2shoes 3 days ago

          “the King stay the King.” —- D’Angelo Barksdale

          • sirspacey 3 days ago

            “Original King Julius is on the line.” - Sacha Baron Cohen

        • macintux 3 days ago

          “How do you shoot the devil in the back? What if you miss?”

        • ionwake 3 days ago

          the real OG comment here

      • ropable 3 days ago

        "When you play the game of thrones, you win or you die." - Cersei Lannister

      • dangitman 3 days ago

        "You come at the king, you best not miss." - Omar

    • bg24 3 days ago

      This is the likely scenario. Every conflict at the exec level comes with a "messaging" aspect, with a comms team and the board managing that part.

    • amenhotep 3 days ago

      Failed coup? Altman managed to usurp the board's power, seems pretty successful to me

      • xwowsersx 3 days ago

        I think OP means the failed coup in which they attempted to oust Altman?

        • jordanb 3 days ago

          Yeah the GP's point is the board was acting within its purview by dismissing the CEO. The coup was the successful counter-campaign against the board by Altman and the investors.

          • jeremyjh 3 days ago

            The successful coup was led by Satya Nadella.

          • ethbr1 3 days ago

            Let's be honest: in large part by Microsoft.

            • llamaimperative 3 days ago

              Does it matter? The board made a decision and the CEO reversed it. There is no clearer example of a corporate coup.

      • optimalsolver 3 days ago

        [flagged]

        • richbell 3 days ago

          For fun:

          > In the sentence, the people responsible for the coup are implied to be Murati and Ilya. The phrase "Murati (and Ilya for that matter) took their shot to remove Sam" suggests that they were the ones who attempted to remove Sam (presumably a leader or person in power) but failed, leading to a situation where they had to cooperate temporarily despite tensions.

    • nopromisessir 3 days ago

      Highly speculative.

      Also highly cynical.

      Some folks are professional and mature. In the best organisations, the management team sets the highest possible standard, in terms of tone and culture. If done well, this tends to trickle down to all areas of the organization.

      Another speculation would be that she's resigning for complicated reasons which are personal. I've had to do the same in my past. The real pros give the benefit of the doubt.

      • itsoktocry 3 days ago

        What leads you to believe that OpenAI is one of the best managed organizations?

        • nopromisessir 3 days ago

          Many hours of interviews.

          Organizational performance metrics.

          Frequency of scientific breakthroughs.

          Frequency and quality of product updates.

          History of consistently setting the state of the art in artificial intelligence.

          Demonstrated ability to attract world class talent.

          Released the fastest growing software product in the history of humanity.

          • kranke155 3 days ago

            We have to see if they’ll keep executing in a year, considering the losses in staff and the non technical CEO.

            • nopromisessir 3 days ago

              I don't get this.

              I could write paragraphs...

              Why the rain clouds?

      • dfgtyu65r 3 days ago

        This feels naive, especially given what we now know about Open AI.

        • nopromisessir 3 days ago

          If you care to detail supporting evidence, I'd be keen to see.

            Please, no speculative pieces, rumors, or hearsay.

          • apwell23 3 days ago

            Well, why was Sam Altman fired? It was never revealed.

            CEOs get fired all the time and company puts out a statement.

            I've never seen "we won't tell you why we fired our CEO" anywhere.

            Now he is back making totally ridiculous statements like 'AI is going to solve all of physics' or 'AI is going to clone my brain by 2027'.

            This is a strange company.

            • alephnerd 3 days ago

              > This is a strange company.

              Because the old guard wanted it to remain a cliquey non-profit filled to the brim with EA, AI Alignment, and OpenPhilanthropy types, but the current OpenAI is now an enterprise company.

              This is just Sam Altman cleaning house after the attempted corporate coup a year ago.

              • llamaimperative 3 days ago

                When the board fires the CEO and the CEO reverses the decision, that is the coup.

                The board’s only reason to exist is effectively to fire the CEO.

              • apwell23 3 days ago

                I think that's some rumors they spread to make this look like a "conflict of philosophy" type bs.

                There are some juicy rumors about what actually happened too. Much more believable, lol.

      • sverhagen 3 days ago

        Did you also try to oust the CEO of a multi-billion dollar juggernaut?

        • nopromisessir 3 days ago

          Sure didn't.

          Neither did she though... To my knowledge.

          Can you provide any evidence that she tried to do that? I would ask that it be non-speculative in nature please.

          • alephnerd 3 days ago
            • nopromisessir 3 days ago

              Below are excerpts from the article you link. I'd suggest a more careful read-through - unless you dismiss out of hand the first-hand accounts given to the NYT by both Murati and Sutskever...

              This piece is built on conjecture from a source whose identity is withheld. The source's version of events is openly refuted by the parties in question. Offering it as evidence that Murati intentionally made political moves in order to get Altman ousted is an indefensible position.

              'Mr. Sutskever’s lawyer, Alex Weingarten, said claims that he had approached the board were “categorically false.”'

              'Marc H. Axelbaum, a lawyer for Ms. Murati, said in a statement: “The claims that she approached the board in an effort to get Mr. Altman fired last year or supported the board’s actions are flat wrong. She was perplexed at the board’s decision then, but is not surprised that some former board members are now attempting to shift the blame to her.” In a message to OpenAI employees after publication of this article, Ms. Murati said she and Mr. Altman “have a strong and productive partnership and I have not been shy about sharing feedback with him directly.”

              She added that she did not reach out to the board but “when individual board members reached out directly to me for feedback about Sam, I provided it — all feedback Sam already knew,” and that did not mean she was “responsible for or supported the old board’s actions.”'

              This part of NYT piece is supported by evidence:

              'Ms. Murati wrote a private memo to Mr. Altman raising questions about his management and also shared her concerns with the board. That move helped to propel the board’s decision to force him out.'

              INTENT matters. Murati says the board asked for her concerns about Altman. She provided them and had already brought them to Altman's attention... in writing. Her actions demonstrate transparency and professionalism.

  • jsheard 3 days ago

    > It is hard for me to square "This company is a few short years away from building world-changing AGI"

    Altman's quote was that "it's possible that we will have superintelligence in a few thousand days", which sounds a lot more optimistic on the surface than it actually is. A few thousand days could be interpreted as 10 years or more, and by adding the "possible" qualifier he didn't even really commit to that prediction.

    It's hype with no substance, but vaguely gesturing that something earth-shattering is coming does serve to convince investors to keep dumping endless $billions into his unprofitable company, without risking the reputational damage of missing a deadline since he never actually gave one. Just keep signing those 9 digit checks and we'll totally build AGI... eventually. Honest.

    • ben_w 3 days ago

      Between 1 and 10 thousand days, so roughly 3 to 27 years.

      A range I'd agree with; for me, "pessimism" is the shortest part of that range, but even then you have to be very confident the specific metaphorical horse you're betting on is going to be both victorious in its own right and not, because there's no suitable existing metaphor, secretly an ICBM wearing a pantomime costume.
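
      For anyone who doesn't want to do the conversion in their head, a quick sanity check on the day counts being floated in this subthread (these are just possible readings of "a few thousand", not anything Altman committed to):

        # convert "N thousand days" into years
        for thousands in (1, 3, 7, 10, 15):
            print(f"{thousands:>2},000 days ~= {thousands * 1000 / 365.25:.1f} years")
        # 1,000 ~= 2.7 | 3,000 ~= 8.2 | 7,000 ~= 19.2 | 10,000 ~= 27.4 | 15,000 ~= 41.1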

      • dimitri-vs 3 days ago

        Just in time for them to figure out fusion to power all the GPUs.

        But really. o1 has been very whelming, nothing like the step up from 3.5 to 4. Still prefer sonnet3.5 and opus.

      • zooq_ai 3 days ago

        For 1, you use "1".

        For 2 (or even 3), you use "a couple".

        "A few" is almost always > 3, and one could argue the upper limit is 15.

        So, 10 years to 50 years.

        • usaar333 3 days ago

          few is not > 3. Literally it's just >= 2, though I think >= 3 is the common definition.

          15 is too high to be a "few" except in contexts of a few out of tens of thousands of items.

          Realistically I interpret this as 3-7 thousand days (8 to 19 years), which is largely the consensus prediction range anyway.

          • rsynnott 3 days ago

            While it's not really _wrong_ to describe two things as 'a few', as such, it's unusual and people don't really do it in standard English.

            That said, I think people are possibly overanalysing this very vague barely-even-a-claim just a little. Realistically, when a tech company makes a vague claim about what'll happen in 10 years, that should be given precisely zero weight; based on historical precedent you might as well ask a magic 8-ball.

        • ben_w 3 days ago

          Personally speaking, above 10 thousand I'd switch to saying "a few tens of thousands".

          But the mere fact you say 15 is arguable does indeed broaden the range, just as me saying 1 broadens it in the opposite extent.

        • fvv 3 days ago

          You imply that he knows exactly when, which IMO he doesn't - for all we know it could even be next year. Who knows what papers are yet to be published??

    • 015a 3 days ago

      Because as we all know: Full Self Driving is just six months away.

      • squarefoot 3 days ago

        Thanks, now I can't stop picturing this vision: developers activate the first ASI, and after 3 minutes it spits out full code and plans for a working Full Self Driving car prototype :)

        • blitzar 3 days ago

          I thought super-intelligence was to say self driving would be fully operational next year for 10 consecutive years?

          • squarefoot 2 days ago

            My point was that only super intelligence could possibly solve a problem that we can only pretend to have solved.

    • petre 3 days ago

      > it's possible that we will have superintelligence in a few thousand days

      Sure, a few thousand days and a few trillion $ away. We'll also have full self driving next month. This is just like the fusion is the energy of the future joke: it's 30 years away and it will always be.

      • actionfromafar 3 days ago

        Now it’s 20 years away! It took 50 years for it to go from 30 to 20 years away. So maybe, in another 50 years it will be 10 years away?

    • z7 3 days ago

      >Altman's quote was that AGI "could be just a few thousand days away", which sounds a lot more optimistic on the surface than it actually is.

      I think he was referring to ASI, not AGI.

      • umeshunni 3 days ago

        Isn't ASI > AGI?

        • ben_w 3 days ago

          Both are poorly defined.

          By all the standards I had growing up, ChatGPT is already AGI. It's almost certainly not as economically transformative as it needs to be to meet OpenAI's stated definition.

          OTOH that may be due to limited availability rather than limited quality: if all the 20 USD/month for Plus gets spent on electricity to run the servers, at $0.10/kWh, that's about 274 W average consumption. Scaled up to the world population, that's approximately the entire global electricity supply. Which is kinda why there's also all the stories about AI data centres getting dedicated power plants.
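
          Spelling out that back-of-envelope calculation, with the assumed inputs made explicit (roughly 730 hours in a month, ~8 billion people, and an average global electricity supply on the order of 3 TW; all rough figures):

            plan_usd_per_month = 20.0   # ChatGPT Plus price
            usd_per_kwh = 0.10          # assumed electricity price
            hours_per_month = 730.0     # ~365.25 * 24 / 12

            kwh_per_month = plan_usd_per_month / usd_per_kwh        # 200 kWh
            avg_watts = kwh_per_month * 1000 / hours_per_month      # ~274 W per subscriber

            world_population = 8e9
            total_terawatts = avg_watts * world_population / 1e12   # ~2.2 TW
            print(f"{avg_watts:.0f} W per user, ~{total_terawatts:.1f} TW if everyone subscribed")
            # versus roughly 3 TW of average global electricity generation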

          • Spivak 3 days ago

            Don't know why you're being downvoted, these models meet the definition of AGI. It just looks different than perhaps we expected.

            We made a thing that exhibits the emergent property of intelligence. A level of intelligence that trades blows with humans. The fact that our brains do lots of other things to make us into self-contained autonomous beings is cool and maybe answers some questions about what being sentient means but memory and self-learning aren't the same thing as intelligence.

            I think it's cool that we got there before simulating an already existing brain and that intelligence can exist separate from consciousness.

        • CaptainFever 3 days ago

          Is the S here referring to Sentient or Specialised?

          • ben_w 3 days ago

            Super(human).

            Old-school AI was already specialised. Nobody can agree what "sentient" is, and if sentience includes a capacity to feel emotions/qualia etc. then we'd only willingly choose that over non-sentient for brain uploading not "mere" assistants.

          • romanhn 3 days ago

            Super, whatever that means

      • bottlepalm 3 days ago

        ChatGPT is already smarter and faster than humans on many different metrics. Once the other metrics catch up with humans, it will still be better than humans on the existing metrics. Therefore there will be no AGI, only ASI.

        • threeseed 3 days ago

          My fridge is already smarter and faster than humans in many different metrics.

          Has been this way since calculation machines were invented hundreds of years ago.

          • rsynnott 3 days ago

            _Thousands_; an abacus can outperform any unaided human at certain tasks.

    • vasco 3 days ago

      OpenAI is a Microsoft play to get into power generation business, specifically nuclear, which is a pet interest of Bill Gates for many years.

      There, that's my conspiracy theory quota for 2024 in one comment.

      • kolbe 3 days ago

        I don't think Gates has much influence on Microsoft these days.

        • basementcat 3 days ago

          He controls approximately 1% of the voting shares of MSFT.

          • kolbe 3 days ago

            And I would argue his "soft power" is greatly diminished as well

      • PoignardAzur 3 days ago

        It's kinda cool as a conspiracy theory. It's just reasonable enough if you don't know any of the specifics. And the incentives mostly make sense, if you don't look too closely.

    • theGnuMe 3 days ago

      To paraphrase a notable example: We will have full self driving capability next year..

  • blihp 3 days ago

    This was the company that made all sorts of noise about how they couldn't release GPT-2 to the public because it was too dangerous [1]. While there are many very useful applications being developed, OpenAI's main deliverable appears to be hype that I suspect, when all is said and done, they will fail to deliver on. I think the main thing they are doing quite successfully is cashing in on the hype before people figure it out.

    [1] https://slate.com/technology/2019/02/openai-gpt2-text-genera...

    • johnfn 3 days ago

      GPT-2 and descendants have polluted the internet with AI spam. I don't think that this is too unreasonable of a claim.

  • shmatt 3 days ago

    I feel like this is stating the obvious - though I guess it's not obvious to many - but a probabilistic syllable generator is not intelligence, it does not understand us, it cannot reason, it can only generate the next syllable.

    It makes us feel understood in the same way John Edward used to on daytime TV; it's all about how language makes us feel.

    true AGI...unfortunately we're not even close

    • lumenwrites 3 days ago

      "Intelligence" is a poorly defined term prone to arguments about semantics and goalpost shifting.

      I think it's more productive to think about AI in terms of "effectiveness" or "capability". If you ask it, "what is the capital of France?", and it replies "Paris" - it doesn't matter whether it is intelligent or not, it is effective/capable at identifying the capital of France.

      Same goes for producing an image, writing SQL code that works, automating some % of intellectual labor, giving medical advice, solving an equation, piloting a drone, building and managing a profitable company. It is capable of various things to various degrees. If these capabilities are enough to make money, create risks, change the world in some significant way - that is the part that matters.

      Whether we call it "intelligence" or "probabilistically generating syllables" is not important.

    • atleastoptimal 3 days ago

      It can actually solve problems, though; it's not just an illusion of intelligence if it does the stuff we considered, mere years ago, sufficient to be intelligent. But you and others keep moving the goalposts as benchmarks saturate, perhaps due to a misplaced pride in the specialness of human intelligence.

      I understand the fear, but the knee-jerk response “it's just predicting the next token, thus it could never be intelligent” makes you look more like a stochastic parrot than these models are.

      • ssnistfajen 3 days ago

        It solves problems because it was trained with the solutions to these problems that have been written down a thousand times before. A lot of people don't even consider the ability to solve problems to be a reliable indicator of human intelligence, see the constantly evolving discourse regarding standardized tests.

        Attempts at autonomous AI agents are still failing spectacularly because the models don't actually have any thought or memory. Context is provided to them via prefixing the prompt with all previous prompts which obviously causes significant info loss after a few interaction loops. The level of intellectual complexity at play here is on par with nematodes in a lab (which btw still can't be digitally emulated after decades of research). This isn't a diss on all the smart people working in AI today, bc I'm not talking about the quality of any specific model available today.
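
        To make the "context is just prefixing the prompt" point concrete, here's a minimal sketch of a typical chat loop (assuming the OpenAI Python SDK >= 1.0; the model name and the crude truncation rule are illustrative, not any specific agent framework's actual implementation):

          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment
          history = [{"role": "system", "content": "You are a helpful assistant."}]

          def ask(user_msg: str, max_turns: int = 20) -> str:
              history.append({"role": "user", "content": user_msg})
              # The model holds no state of its own: its entire "memory" is whatever
              # we re-send here, and anything trimmed out of the history is simply gone.
              trimmed = [history[0]] + history[1:][-max_turns:]
              resp = client.chat.completions.create(model="gpt-4o", messages=trimmed)
              answer = resp.choices[0].message.content
              history.append({"role": "assistant", "content": answer})
              return answer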

        • atleastoptimal 2 days ago

          You're acting like 99% of humans aren't very much dependent on that same scaffolding. Humans spend 12+ years in school, their brains being hammered with the exact rules of math, grammar, and syntax. To perform our jobs, we often consult documentation or other people performing the same task. Only after much extensive, deep thought can we extrapolate usefully beyond our training set.

          LLMs do have memory and thought. I've invented a few somewhat unusual games, described them to Sonnet 3.5, and it reproduced them in code almost perfectly. Likewise, their memory has been scaling: just a couple of years ago, context windows were 8,000 tokens maximum; now they're reaching the millions.

          I feel like you're approaching all these capabilities with a myopic viewpoint, then playing semantic judo to obfuscate the nature of these increases as "not counting" since they can be vaguely mapped to something that has a negative connotation.

          >A lot of people don't even consider the ability to solve problems to be a reliable indicator of intelligence

          That's a very bold statement, as lots of smart people have said that the very definition of intelligence is the ability to solve problems. If fear of the effectiveness of LLMs in behaving genuinely intelligently leads you to make extreme sweeping claims about what intelligence doesn't count as, then you're forcing yourself into a smaller and smaller corner as AI SOTA capabilities predictably increase month after month.

      • caconym_ 3 days ago

        The "goalposts" are "moving" because now (unlike "mere years ago") we have real AI systems that are at least good enough to be seriously compared with human intelligence. We aren't vaguely speculating about what such an AI system might be like^[1]; we have the real thing now, and we can test its capabilities and see what it is like, what it's good at, and what it's not so good at.

        I think your use of the "goalposts" metaphor is telling. You see this as a team sport; you see yourself on the offensive, or the defensive, or whatever. Neither is conducive to a balanced, objective view of reality. Modern LLMs are shockingly "smart" in many ways, but if you think they're general intelligence in the same way humans have general intelligence (even disregarding agency, learning, etc.), that's a you problem.

        ^[1] I feel the implicit suggestion that there was some sort of broad consensus on this in the before-times is revisionism.

        • atleastoptimal 2 days ago

          > but if you think they're general intelligence in the same way humans have general intelligence (even disregarding agency, learning, etc.), that's a you problem.

          How is it a me problem? The idea of these models being intelligent is shared with a large number of researchers and engineers in the field. Such is clearly evident when you can ask o1 some random completely novel question about a hypothetical scenario and it gets the implication you're trying to make with it very well.

          I feel that simultaneously praising their abilities while claiming that they still aren't intelligent "in the way humans are" is just obscure semantic judo meant to stake an unfalsifiable claim. There will always be somewhat of a difference between large neural networks and human brains, but the significance of the difference is a subjective opinion depending on what you're focusing on. I think focusing on the realm of "useful, hard things that are unique to intelligent systems and their ability to understand the world" is more important than focusing on "possesses the special kind of intelligence that only humans have".

          • caconym_ 2 days ago

            > I think it's much more important to focus on the realm of "useful, hard things that are unique to intelligent systems and their ability to understand the world" is more important than "Possesses the special kind of intelligence that only humans have".

            This is a common strawman that appears in these conversations—you try to reframe my comments as if I'm claiming human intelligence runs on some kind of unfalsifiable magic that a machine could never replicate. Of course, I've suggested no such thing, nor have I suggested that AI systems aren't useful.

    • HeatrayEnjoyer 3 days ago

      This overplayed knee jerk response is so dull.

    • svara 3 days ago

      I truly think you haven't really thought this through.

      There's a huge amount of circuitry between the input and the output of the model. How do you know what it does or doesn't do?

      Humans brains "just" output the next couple milliseconds of muscle activation, given sensory input and internal state.

      Edit: Interestingly, this is getting downvotes even though 1) my last sentence is a precise and accurate statement of the state of the art in neuroscience and 2) it is completely isomorphic to what the parent post presented as an argument against current models being AGI.

      To clarify, I don't believe we're very close to AGI, but parent's argument is just confused.

      • 015a 3 days ago

        Did you seriously just use the word "isomorphic"? No wonder people believe AI is the next crypto.

        • svara 3 days ago

          Well, AI clearly is the next crypto, haha.

          Apologies for the wording but I think you got it and the point stands.

          I'm not a native speaker and mostly use English in a professional science related setting, that's why I sound like that sometimes.

          isomorphic - being of identical or similar form, shape, or structure (m-w). Here metaphorically applied to the structure of an argument.

        • edouard-harris 3 days ago

          In what way was their usage incorrect? They simply said that the brain just predicts next-actions, in response to a statement that an LLM predicts next-tokens. You can believe or disbelieve either of those statements individually, but the claims are isomorphic in the sense that they have the same structure.

          • 015a 2 days ago

            It's not that it was used incorrectly: it's that it isn't a word actual humans use, and it's one of a handful of dog whistles for "I'm a tech grifter who has at best a tenuous grasp on what I'm talking about but would love more venture capital". The last time I've personally heard it spoken was from Beff Jezos/Guillaume Verdon.

            • svara 2 days ago

              You know, you can just talk to me about my wording. Where do I meet those gullible venture investors?

            • NoGravitas 2 days ago

              I think we should delve further into that analysis.

      • HarHarVeryFunny 3 days ago

        > There's a huge amount of circuitry between the input and the output of the model

        Yeah - but it's just a stack of transformer layers. No looping, no memory, no self-modification (learning). Also, no magic.

        • svara 3 days ago

          No looping, but you can unroll loops to a fixed depth and apply the model iteratively. There obviously is memory and learning.

          Neuroscience hasn't found the magic dust in our brains yet, either. ;)

          • HarHarVeryFunny 2 days ago

            Zero memory inside the model from one input (ie token output) to the next (only the KV cache, which is just an optimization). The only "memory" is what the model outputs and therefore gets to re-consume (and even there it's an odd sort of memory since the model itself didn't exactly choose what to output - that's a random top-N sampling).

            There is no real runtime learning - certainly no weight updates. The weights are all derived from pre-training, and so the runtime model just represents a frozen chunk of learning. Maybe you are thinking of "in-context learning", which doesn't update the weights, but is rather the ability of the model to use whatever is in the context, including having that "reinforced" by repetition. This is all a poor substitute for what an animal does - continuously learning from experience and exploration.

            The "magic dust" in our brains, relative to LLMs, is just a more advanced and structured architecture, and operational dynamics. E.g. we've got the thalamo-cortical loop, massive amounts of top-down feedback for incremental learning from prediction failure, working memory, innate drives such as curiosity (prediction uncertainty) and boredom to drive exploration and learning, etc., etc. No magic, just architecture.

            • svara 2 days ago

              I'm not entirely sure what you're arguing for. Current AI models can still get a lot better, sure. I'm not in the AGI in 3 years camp.

              But, people in this thread are making philosophically very poor points about why that is supposedly so.

              It's not "just" sequence prediction, because sequence prediction is the very essence of what the human brain does.

              Your points on learning and memory are similarly weak word play. Memory means holding some quantity constant over time in the internal state of a model. Learning means being able to update those quantities. LLMs obviously do both.

              You're probably going to be thinking of all sorts of obvious ways in which LLMs and humans are different.

              But no one's claiming there's an artificial human. What does exist is increasingly powerful data processing software that progressively encroaches on domains previously thought to be that of humans only.

              And there may be all sorts of limitations to that, but those (sequences, learning, memory) aren't them.

              • HarHarVeryFunny 2 days ago

                > It's not "just" sequence prediction, because sequence prediction is the very essence of what the human brain does.

                Agree wrt the brain.

                Sure, LLMs are also sequence predictors, and this is a large part of why they appear intelligent (intelligence = learning + prediction). The other part is that they are trained to mimic their training data, which came from a system of greater intelligence than their own, so by mimicking a more intelligent system they appear to be punching above their weight.

                I'm not sure that "JUST sequence predictors" is so inappropriate though - sure sequence prediction is a powerful and critical capability (the core of intelligence), but that is ALL that LLMs can do, so "just" is appropriate.

                Of course additionally not all sequence predictors are of equal capability, so we can't even say, "well, at least as far as being sequence predictors goes, they are equal to humans", but that's a difficult comparison to make.

                > Your points on learning and memory are similarly weak word play. Memory means holding some quantity constant over time in the internal state of a model. Learning means being able to update those quantities. LLMs obviously do both.

                Well, no...

                1) LLMs do NOT "hold some quantity constant over time in the internal state of the model". It is a pass-thru architecture with zero internal storage. When each token is generated it is appended to the input, and the updated input sequence is fed into the model and everything is calculated from scratch (other than the KV cache optimization); a rough sketch of this loop follows below. The model appears to have internal memory due to the coherence of the sequence of tokens it is outputting, but in reality everything is recalculated from scratch, and the coherence is due to the fact that adding one token to the end of a sequence doesn't change the meaning of the sequence by much, and most of what is recalculated will therefore be the same as before.

                2) If the model has learnt something, then it should have remembered it from one use to another, but LLMs don't do this. Once the context is gone and the user starts a new conversation/session, then all memory of the prior session is gone - the model has NOT updated itself to remember anything about what happened previously. If this was an employee (an AI coder, perhaps) then it would be perpetual groundhog day. Every day it came to work it'd be repeating the same mistakes it made the day before, and would have forgotten everything you might have taught it. This is not my definition of learning, and more to the point the lack of such incremental permanent learning is what'll make LLMs useless for very many jobs. It's not an easy fix, which is why we're stuck with massively expensive infrequent retrainings from scratch rather than incremental learning.
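
                A minimal sketch of the loop described in 1), using the Hugging Face transformers library with GPT-2 as a stand-in model (assumed installed; greedy decoding only). Nothing persists inside the model between steps: the growing token sequence is the only "memory", and the KV cache merely avoids recomputing attention over tokens that are already there:

                  import torch
                  from transformers import AutoModelForCausalLM, AutoTokenizer

                  tok = AutoTokenizer.from_pretrained("gpt2")
                  model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

                  ids = tok("The capital of France is", return_tensors="pt").input_ids
                  past = None  # KV cache; drop it and the loop still works, just slower

                  with torch.no_grad():
                      for _ in range(10):
                          if past is None:
                              out = model(ids, use_cache=True)  # full pass over the prompt
                          else:
                              # feed only the newest token, reusing cached attention state
                              out = model(ids[:, -1:], past_key_values=past, use_cache=True)
                          past = out.past_key_values
                          next_id = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy pick
                          ids = torch.cat([ids, next_id], dim=-1)  # the only state that survives a step

                  print(tok.decode(ids[0]))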

        • HeatrayEnjoyer 2 days ago

          >no memory, no self-modification (learning).

          This is also true of those with advanced Alzheimer's disease. Are they not conscious as well? If we believe they are conscious then memory and learning must not be essential ingredients.

          • HarHarVeryFunny 2 days ago

            I'm not sure what you're trying to say.

            I thought we're talking about intelligence, not consciousness, and limitations of the LLM/transformer architecture that limit their intelligence compared to humans.

            In fact LLMs are not only architecturally limited, but they also give the impression of being far more intelligent than they actually are due to mimicking training sources that are more intelligent than the LLM itself is.

            If you want to bring consciousness into the discussion, then that is basically just the brain modelling itself and the subjective experience that gives rise to. I expect it arose due to evolutionary adaptive benefit - part of being a better predictor (i.e. more intelligent) is being better able to model your own behavior and experiences, but that's not a must-have for intelligence.

            • og_kalu 2 days ago

              LLMs are predictors, not imitators. They don't "mimic". They predict, and that's a pretty big difference.

              • HarHarVeryFunny 2 days ago

                Well, it's typically going to be a collective voice, not an individual, but they are certainly mimicking ... they are trying to predict what the collective voice will say next - to mimic it.

                • og_kalu a day ago

                  No, it's more like they are trying to predict what some given human might say (# among other things).

                  A pretrained transformer in the limit does not converge on any collective or consensus state in that sense; in fact, pre-training actually punishes this. It learns to predict the words of Feynman as readily as the dumbass across the street.

                  When I say that GPT does not mimic, I mean that the training objective literally optimizes for something beyond that.

                  Consider <hash, plaintext> pairs. You can't predict these without cracking the hash algorithm, but you could easily fool a GAN's discriminator (one that has learnt to compute hash functions) just by generating typical instances.

                  # Consider that some of the text on the Internet isn't humans casually chatting or extemporaneous speech. It's the results section of a science paper. It's news stories that say what happened on a particular day. It's text that people crafted over hours or days.

          • lewhoo 2 days ago

            I don't think that's a good example. People with Alzheimer's have, to put it simply, damaged memory, but not a complete lack of it. We're talking about a situation where a person wouldn't even be conscious of being a human/person unless they were told so as part of the current context window. Right?

    • CooCooCaCha 3 days ago

      I'm not saying you're wrong but you could use this reductive rhetorical strategy to dismiss any AI algorithm. "It's just X" is frankly shallow criticism.

      • timr 3 days ago

        And you can dismiss any argument with your response.

        "Your argument is just a reductive rhetorical strategy."

        • CooCooCaCha 3 days ago

          Sure if you ignore context.

          "a probabilistic syllable generator is not intelligence, it does not understand us, it cannot reason" is a strong statement and I highly doubt it's backed by any sort of substance other than "feelz".

          • timr 3 days ago

            I didn't ignore any more context than you did, but I just want to acknowledge the irony that "context" (specifically, here, any sort of memory that isn't in the text context window) is exactly what is lacking with these models.

            For example, even the dumbest dog has a memory, a strikingly advanced concept model of the world [1], a persistent state beyond the last conversation history, and an ability to reason (that doesn't require re-running the same conversation sixteen bajillion times in a row). Transformer models do not. It's really cool that they can input and barf out realistic-sounding text, but let's keep in mind the obvious truths about what they are doing.

            [1] "I like food. Something that smells like food is in the square thing on the floor. Maybe if I tip it over food will come out, and I will find food. Oh no, the person looked at me strangely when I got close to the square thing! I am in trouble! I will have to do it when they're not looking."

            • CooCooCaCha 3 days ago

              > that doesn't require re-running the same conversation sixteen bajillion times in a row

              Let's assume the dog's visual system runs at 60 frames per second. If it takes 1 second to flip a bowl of food over, then that's 60 data points of cause-effect data that the dog's brain learned from.

              Assuming it's the same for humans, let's say I go on a trip to the grocery store for 1 hour. That's 216,000 data points from one trip. Not to mention auditory data, touch, smell, and even taste.

              > ability to reason [...] Transformer models do not

              Can you tell me what reasoning is? Why can't transformers reason? Note I said transformers, not LLMs. You could make a reasonable (hah) case that current LLMs cannot reason (or at least not very well), but why are transformers as an architecture doomed?

              What about chain of thought? Some have made the claim that chain of thought adds recurrence to transformer models. That's a pretty big shift, but you've already decided transformers are a dead end so no chance of that making a difference right?

      • iLoveOncall 3 days ago

        And there's nothing wrong about that: the fact that _artificial intelligence_ will never lead to general intelligence isn't exactly a hot take.

        • CooCooCaCha 3 days ago

          That's both a very general and very bold claim. I don't think it's unreasonable to say that's too strong of a claim given how we don't know what is possible yet and there's frankly no good reason to completely dismiss the idea of artificial general intelligence.

          • NoGravitas 2 days ago

            I think the existence of biological general intelligence is a proof-by-existence for artificial general intelligence. But at the same time, I don't think LLM and similar techniques are likely in the evolutionary path of artificial general intelligence, if it ever comes to exist.

            • CooCooCaCha 2 days ago

              That's fair. I think it could go either way. It just bugs me when people are so certain and it's always some shallow reason about "probability" and "it just generates text".

        • dr_dshiv 3 days ago

          It’s almost trolling at this point, though.

      • paxys 3 days ago

        > to dismiss any AI algorithm

        Or even human intelligence

    • ttul 3 days ago

      While it's true that language models are fundamentally based on statistical patterns in language, characterizing them as mere "probabilistic syllable generators" significantly understates their capabilities and functional intelligence.

      These models can engage in multistep logical reasoning, solve complex problems, and generate novel ideas - going far beyond simply predicting the next syllable. They can follow intricate chains of thought and arrive at non-obvious conclusions. And OpenAI has now shown us that fine-tuning a model specifically to plan step by step dramatically improves its ability to solve problems that were previously the domain of human experts.

      Although there is no definitive evidence that state-of-the-art language models have a comprehensive "world model" in the way humans do, several studies and observations suggest that large language models (LLMs) may possess some elements or precursors of a world model.

      For example, Tegmark and Gurnee [1] found that LLMs learn linear representations of space and time across multiple scales. These representations appear to be robust to prompting variations and unified across different entity types. This suggests that modern LLMs may learn rich spatiotemporal representations of the real world, which could be considered basic ingredients of a world model.

      And even if we look at much smaller models like Stable Diffusion XL, it's clear that they encode a rich understanding of optics [2] within just a few billion parameters (3.5 billion to be precise). Generative video models like OpenAI's Sora clearly have a world model as they are able to simulate gravity, collisions between objects, and other concepts necessary to render a coherent scene.

      As for AGI, the current consensus on Metaculus is that it will arrive in the early 2030s [3]; before GPT-4 arrived, the consensus was that full AGI was not coming until 2041. The consensus for the arrival date of "weakly general" AGI is 2027 [4] (i.e. AGI that doesn't have a robotic physical-world component). The best tool for achieving AGI is the transformer and its derivatives; its scaling keeps going with no end in sight.

      Citations:

      [1] https://paperswithcode.com/paper/language-models-represent-s...

      [2] https://www.reddit.com/r/StableDiffusion/comments/15he3f4/el...

      [3] https://www.metaculus.com/questions/5121/date-of-artificial-...

      [4] https://www.metaculus.com/questions/3479/date-weakly-general...

      • iLoveOncall 3 days ago

        > Generative video models like OpenAI's Sora clearly have a world model as they are able to simulate gravity, collisions between objects, and other concepts necessary to render a coherent scene.

        I won't expand on the rest, but this is simply nonsensical.

        The fact that Sora generates output that matches its training data doesn't show that it has a concept of gravity, collisions between objects, or anything else. It has a "world model" the same way a photocopier has a "document model".

        • svara 3 days ago

          My suspicion is that you're leaving some important parts of your logic unstated, such as a belief in some magical human property of "understanding", which you don't define.

          The ability of video models to generate novel video consistent with physical reality shows that they have extracted important invariants - physical law - out of the data.

          It's probably better not to muddle the discussion with ill-defined terms such as "intelligence" or "understanding".

          I have my own beef with the AGI is nigh crowd, but this criticism amounts to word play.

          • phatfish 3 days ago

            It feels like if these image and video generation models were really resolving some fundamental laws from the training data, they should at least be able to re-create the same scene from a different angle.

          • some1else 3 days ago

            "Allegory of the cave" comes to mind, when trying to describe the understanding that's missing from diffusion models. I think a super-model with such qualifications would require a number of ControlNets in a non-visual domains to be able to encode understanding of the underlying physics. Diffusion models can render permutations of whatever they've seen fairly well without that, though.

            • svara 3 days ago

              I'm very familiar with the allegory of the cave, but I'm not sure I understand where you're going with the analogy here.

              Are you saying that it is not possible to learn about dynamics in a higher dimensional space from a lower dimensional projection? This is clearly not true in general.

              E.g., video models learn that even though they're only ever seeing and outputting 2d data, objects have different sides, in a fashion that is consistent with our 3d reality.

              The distinction you (and others in this thread) are making is purely one of degree - how much generalization has been achieved, and how well - versus one of category.

      • PollardsRho 3 days ago

        > its scaling keeps going with no end in sight.

        Not only are we within eyesight of the end, we're more or less there. o1 isn't just scaling up parameter count 10x again and making GPT-5, because that's not really an effective approach at this point in the exponential curve of parameter count and model performance.

        I agree with the broader point: for all we know, it's consistent with current neuroscience that our brains are doing nothing more than predicting the next inputs in a broadly similar way, and any categorical distinction between AI and human intelligence seems quite challenging to draw.

        I disagree that we can draw a line from scaling current transformer models to AGI, however. A model that is great for communicating with people in natural language may not be the best for deep reasoning, abstraction, unified creative visions over long-form generations, motor control, planning, etc. The history of computer science is littered with simple extrapolations from existing technology that completely missed the need for a paradigm shift.

        • versteegen 3 days ago

          The fact that OpenAI created and released o1 doesn't mean they won't also scale models upwards or don't think it's their best hope. There's been plenty said implying that they are.

          I definitely agree that AGI isn't just a matter of scaling transformers, and also as you say that they "may not be the best" for such tasks. (Vanilla transformers are extremely inefficient.) But the really important point is that transformers can do things such as abstract, reason, form world models and theories of minds, etc, to a significant degree (a much greater degree than virtually anyone would have predicted 5-10 years ago), all learnt automatically. It shows these problems are actually tractable for connectionist machine learning, without a paradigm shift as you and many others allege. That is the part I disagree with. But more breakthroughs needed.

          • ttul 2 days ago

            To wit: OpenAI was until quite recently investigating having TSMC build a dedicated semiconductor fab to produce OpenAI chips [1]:

            (Translated from Chinese) > According to industry insiders, OpenAI originally negotiated actively with TSMC to build a dedicated wafer fab, but after evaluating the costs and benefits it shelved that plan. Strategically, OpenAI has instead sought cooperation with American companies such as Broadcom and Marvell to develop its own ASIC chips, and OpenAI is expected to become one of Broadcom's top four customers.

            [1] https://money.udn.com/money/story/5612/8200070 (Chinese)

            Even if OpenAI doesn't build its own fab -- a wise move, if you ask me -- the investment required to develop an ASIC on the very latest node is eye watering. Most people - even people in tech - just don't have a good understanding of how "out there" semiconductor manufacturing has become. It's basically a dark art at this point.

            For instance, TSMC themselves [2] don't even know at this point whether the A16 node chosen by OpenAI will require the forthcoming High NA lithography machines from ASML. The High NA machines cost nearly twice as much as the already exceptionally expensive Extreme Ultraviolet (EUV) machines [3]. At close to $400M each, this is simply eye watering.

            I'm sure some gurus here on HN have a more up to date idea of the picture around A16, but the fundamental news is this: If OpenAI doesn't think scaling will be needed to get to AGI, then why would they be considering spending many billions on the latest semiconductor tech?

            Citations: [2] https://www.asiabusinessoutlook.com/news/tsmc-to-mass-produc... [3] https://www.phonearena.com/news/apple-paid-twice-as-much-for...

    • Erem 3 days ago

      The only useful way to define an AGI is based on its capabilities, not its implementation details.

      Based on capabilities alone, current LLMs demonstrate many of the capabilities practitioners ten years ago would have tossed into the AGI bucket.

      What are some top capabilities (meaning inputs and outputs) you think are missing on the path between what we have now and AGI?

  • paxys 3 days ago

    Regardless of where AI currently is and where it is going, you don't simply quit as CTO of the company that is leading the space by far in terms of technology, products, funding, revenue, popularity, adoption and just about everything else. She was fired, plain and simple.

    • rvnx 3 days ago

      You can leave and be happy with $30M+ in stock and good prospects of easily finding another job.

    • noiwillnot 3 days ago

      > leading the space by far in terms of technology, products, funding, revenue, popularity, adoption and just about everything else

      I am not 100% sure that they are still clearly leading the technology part, but I agree on all other counts.

    • piuantiderp 3 days ago

      Or you are disgusted and leave. Are there things more important than money? The OpenAI founders certainly sold themselves as not being in it for the money.

  • f0e4c2f7 3 days ago

    There is one clear answer in my opinion:

    There is a secondary market for OpenAI stock.

    It's not a public market so nobody knows how much you're making if you sell, but if you look at current valuations it must be a lot.

    In that context, it would be quite hard not to sell, whether you leave or stay. What if OAI loses the lead? What if open source wins? Keeping the stock seems like the actual hard thing to me, and I expect to see many others leave (like early Googlers or Facebook employees did).

    Sure it's worth more if you hang on to it, but many think "how many hundreds of M's do I actually need? Better to derisk and sell"

    • chatcode 3 days ago

      What would you do if

      a) you had more money than you'll ever need in your lifetime

      b) you think AI abundance is just around the corner, likely making everything cheaper

      c) you realize you still only have a finite time left on this planet

      d) you have non-AGI dreams of your own that you'd like to work on

      e) you can get funding for anything you want, based on your name alone

      Do you keep working at OpenAI?

  • orionsbelt 3 days ago

    Maybe she thinks the _world_ is a few short years away from building world-changing AGI, not just limited to OpenAI, and she wants to compete and do her own thing (and easily raise $1B like Ilya).

    • xur17 3 days ago

      Which is arguably a good thing (having AGI spread amongst multiple entities rather than one leader).

      • tomrod 3 days ago

        The show Person of Interest comes to mind.

        • tempodox 3 days ago

          Samaritan will take us by the hand and lead us safely through this brave new world.

      • HeatrayEnjoyer 3 days ago

        How is that good? An arms race increases the pressure to go fast and disregard alignment and safety; non-proliferation is essential.

        • ssnistfajen 3 days ago

          Probably off-topic for this thread but my own rather fatalist view is alignment/safety is a waste of effort if AGI will happen. True AGI will be able to self-modify at a pace beyond human comprehension, and won't be obligated to comply with whatever values we've set for it. If it can be reined in with human-set rules like a magical spell, then it is not AGI. If humans have free will, then AGI will have it too. Humans frequently go rogue and reject value systems that took decades to be baked into them. There is no reason to believe AGI won't do the same.

        • PhilipRoman 2 days ago

          Feels like the pope trying to ban crossbows tbh.

    • zooq_ai 3 days ago

      I can't imagine investors pouring money into her. She has zero credibility: she's neither hardcore STEM like Ilya nor a visionary like Jobs/Musk.

      • phatfish 3 days ago

        "Credibility" has nothing to do with how much money rich people are willing to give you.

      • KoftaBob 3 days ago

        She was the CTO, how does she not have STEM credibility?

        • peanuty1 3 days ago

          Has she published a single AI research paper?

        • zooq_ai 3 days ago

          Sometimes with good looks and charm, you can fall up.

          https://en.wikipedia.org/wiki/Mira_Murati

          Point me to a single credential that would make you feel confident putting your money on her.

          • csomar 3 days ago

            She studied math early on, so she's definitely technical. She's the CTO, so she kind of needs to balance the managerial side while having enough understanding of the underlying technology.

            • zooq_ai 2 days ago

              Again, it's easy to be a CTO at a startup. You just have to be there at the right time. Your role is literally to handle all the stuff the researchers/engineers would otherwise have to deal with. Do you really think Mira set the technical agenda and architecture for OpenAI?

              It's a pity that the HN crowd doesn't go one level deeper and truly understand things from first principles

  • apwell23 3 days ago

    Her rise didn't make sense to me. Product manager at Tesla to CTO at OpenAI, with no technical background and a deleted profile?

    This is a very strange company to say the least.

    • mlazos 3 days ago

      Agreed. When a company rises to prominence so fast, I feel like you can end up with inexperienced people really high up in management. High risk, high reward for them. The board was also like this - a lot of inexperienced, random people leading a super consequential company, resulting in the shenanigans we saw, and now most of them are gone. Not saying inexperienced people are inherently bad, but they either grow into the role or don't. Mira is probably very smart, but I don't think you can build a team around her like you can around Ilya or other big-name researchers. I'm happy for her, riding one of the wildest rocket ships of at least the past 5 years, but I don't expect to hear much about her from now on.

    • nebula8804 3 days ago

      >Product manager at tesla to CTO at openAI with no technical background and a deleted profile ?

      Doesn't she have a dual bachelors in Mathematics and Mechanical Engineering?

      • apwell23 3 days ago

        That's what is needed to get a job as a product manager these days?

        • nebula8804 3 days ago

          Well that and years of experience leading projects. Wasn't she head of the Model X program at Tesla?

          But my point is that she does have a technical background.

          • apwell23 3 days ago

            > Well that and years of experience leading projects. Wasn't she head of the Model X program at Tesla?

            No idea, because she scrubbed her LinkedIn profile. But AFAIK she didn't have "years of experience leading projects" when she got the job as lead PM at Tesla. That was her first job as a PM.

    • alephnerd 3 days ago

      A significant portion of the old guard at OpenAI was part of the Effective Altruism, AI Alignment, and Open Philanthropy movement.

      Most hiring in the foundational AI/model space is very nepotistic and biased towards people in that clique.

      Also, Elon Musk used to be the primary patron for OpenAI before losing interest during the AI Winter in the late 2010s.

      • comp_throw7 3 days ago

        Which has zero explanatory power w.r.t. Murati, since she's not part of that crowd at all. But her previously working at an Elon company seems like a plausible route, if she did in fact join before he left OpenAI (since he left in Feb 2018).

    • fzzzy 3 days ago

      You have to remember that OpenAI's mission was considered absolute batshit insane back then.

  • ren_engineer 3 days ago

    Most of the people seem to be leaving due to the direction Altman is taking OpenAI. It went from a charity to him seemingly doing everything possible to monetize it for himself, both directly and indirectly, by trying to raise funds for AI-adjacent, traditionally structured companies he controlled.

    Probably not a coincidence that she resigned at almost the same time the rumors about OpenAI completely removing the non-profit board are getting confirmed - https://www.reuters.com/technology/artificial-intelligence/o...

    • ethbr1 3 days ago

      Afaik, he's exceedingly driven to do that, because if they run out of money Microsoft gets to pick the carcass clean.

  • romanovcode 3 days ago

    Maybe she has inside info that it's not "around the corner". Making bigger and bigger models does not make AGI, not to mention the exponential increase in power requirements for these models, which would be basically unfeasible for the mass market.

    Maybe, just maybe, we reached diminishing returns with AI, for now at least.

    • steinvakt 3 days ago

      People have been saying that we reached the limits of AI/LLMs since GPT4. Using o1-preview (which is barely a few weeks old) for coding, which is definitely an improvement, suggests there's still solid improvements going on, don't you think?

      • samatman 3 days ago

        Continued improvement is returns, making it inherently compatible with a diminishing returns scenario. Which I also suspect we're in now: there's no comparing the jump between GPT3.5 and GPT4 with GPT4 and any of the subsequent releases.

        Whether or not we're leveling out, only time will tell. That's definitely what it looks like, but it might just be a plateau.

    • xabadut 3 days ago

      + there are many untapped sources of data that contain information about our physical world, such as video

      the curse of dimensionality though...

  • tomrod 3 days ago

    My take is that Altman recognizes LLM winter is coming and is trying to entrench.

    • dartos 3 days ago

      I don’t think we’re gonna see a winter. LLMs are here to stay. Natural language interfaces are great. Embeddings are incredibly useful.

      They just won’t be the hottest thing since smartphones.

      • Yizahi 3 days ago

        LLMs as programs are here to stay. The issue is the expenses/revenue ratio all these LLM corpos have. According to a Sequoia analyst (so not some anon on a forum), there is a giant money hole in that industry, and "giant" doesn't even begin to describe it (IIRC it was $600bn this summer). That whole industry will definitely see a winter soon, even if everything Altman says were true.

      • 015a 3 days ago

        You just described what literally anyone who says "AI Winter" means; the technology doesn't go away, companies still deploy it and evolve it, customers still pay for it, it just stops being so attractive to massive funding and we see fewer foundational breakthroughs.

      • ForHackernews 3 days ago

        They're useful in some situations, but extremely expensive to operate. It's unclear if they'll be profitable in the near future. OpenAI seems to be claiming they need an extra $XXX billion in investment before they can...?

      • xtracto 3 days ago

        I just made a (IMHO) cool test with OpenAI/Linux/TCL-TK:

        "write a TCL/tk script file that is a "frontend" to the ls command: It should provide checkboxes and dropdowns for the different options available in bash ls and a button "RUN" to run the configured ls command. The output of the ls command should be displayed in a Text box inside the interface. The script must be runnable using tclsh"

        It didn't get it right the first time (for some reason it wanted to put in a `mainloop` instruction), but after several corrections I got an ugly but pretty functional UI.

        Imagine a Linux Distro that uses some kind of LLM generated interfaces to make its power more accessible. Maybe even "self healing".

        LLMs don't stop amazing me personally.
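
        For anyone curious what such a front-end boils down to, here's a minimal sketch of the same idea in Python/tkinter rather than Tcl (so this is not the script the model produced, and it only covers a couple of ls options). Incidentally, `mainloop` is a tkinter idiom, which may be where the stray `mainloop` came from; plain tclsh has no such command.

            import subprocess
            import tkinter as tk
            from tkinter import ttk

            root = tk.Tk()
            root.title("ls frontend")

            long_fmt = tk.BooleanVar()   # -l
            show_all = tk.BooleanVar()   # -a
            sort_by = tk.StringVar(value="name")

            tk.Checkbutton(root, text="long format (-l)", variable=long_fmt).pack(anchor="w")
            tk.Checkbutton(root, text="show hidden (-a)", variable=show_all).pack(anchor="w")
            ttk.Combobox(root, textvariable=sort_by, state="readonly",
                         values=["name", "time (-t)", "size (-S)"]).pack(anchor="w")

            output = tk.Text(root, width=80, height=20)
            output.pack(fill="both", expand=True)

            def run_ls():
                # build the ls command from the selected options and show its output
                cmd = ["ls"]
                if long_fmt.get():
                    cmd.append("-l")
                if show_all.get():
                    cmd.append("-a")
                if "-t" in sort_by.get():
                    cmd.append("-t")
                elif "-S" in sort_by.get():
                    cmd.append("-S")
                result = subprocess.run(cmd, capture_output=True, text=True)
                output.delete("1.0", tk.END)
                output.insert(tk.END, result.stdout or result.stderr)

            tk.Button(root, text="RUN", command=run_ls).pack()
            root.mainloop()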

        • ethbr1 3 days ago

          The issue (and I think what's behind the thinking of AI skeptics) is previous experience with the sharp edge of the Pareto principle.

          Current LLMs being 80% of the way to 100% useful doesn't mean there's only 20% of the effort left.

          It means we got the lowest-hanging 80% of utility.

          Bridging that last 20% is going to take a ton of work. Indeed, maybe 4x the effort that getting this far required.

          And people also overestimate the utility of a solution that's randomly wrong. It's exceedingly difficult to build reliable systems when you're stacking a 5% wrong solution on another 5% wrong solution on another 5% wrong solution...
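
          To put a rough number on that stacking point (a quick sketch, assuming independent errors; the 5% figure is just the one used above, not a measured error rate): chaining n components that are each right 95% of the time gives a failure probability of 1 - 0.95^n.

              for n in (1, 3, 5, 10):
                  print(n, round(1 - 0.95 ** n, 3))
              # 1 -> 0.05, 3 -> 0.143, 5 -> 0.226, 10 -> 0.401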

          • nebula8804 3 days ago

            Thank you! You have explained the exact issue I (and probably many others) are seeing trying to adopt AI for work. It is because of this that I don't worry about AI taking our jobs for now. You still need some foundational knowledge in whatever you are trying to do in order to get that remaining 20%. Sometimes this means pushing back against the AI's solution, other times it means reframing the question, and other times it's just giving up and doing the work yourself. I keep seeing all these impressive toy demos, and my experience (Angular and Flask dev) seems to indicate that it is not going to replace any subject matter expert anytime soon. (And I am referring to all three major AI players, as I regularly and religiously test all their releases.)

            >And people also overestimate the utility of a solution that's randomly wrong. It's exceedingly difficult to build reliable systems when you're stacking a 5% wrong solution on another 5% wrong solution on another 5% wrong solution...

            I call this the merry go round of hell mixed with a cruel hall of mirrors. LLM spits out a solution with some errors, you tell it to fix the errors, it produces other errors or totally forgets important context from one prompt ago. You then fix those issues, it then introduces other issues or messes up the original fix. Rinse and repeat. God help you if you don't actually know what you are doing, you'll be trapped in that hall of mirrors for all of eternity slowly losing your sanity.

            • theGnuMe 3 days ago

              and here we are arguing for internet points.

              • tomrod 3 days ago

                Much more meaningful to this existentialist.

        • dartos 3 days ago

          It can work for things of very limited scope, like the one you describe.

          I wrote some data visualizations with Claude and aider.

          For anything that someone would actually pay for (expecting the robustness of paid-for software) I don’t think we’re there.

          The devil is in the details, after all. And detail is what you lose when running reality through a statistical model.

        • therouwboat 3 days ago

          Why make a tool when you can just ask the AI to give you the file list or the files that you need?

      • eastbound 3 days ago

        It’s a glorified grammar corrector?

        • stocknoob 3 days ago

          TIL Math Olympiad problems are simple grammar exercises.

          • dartos 3 days ago

            They do way more than correcting grammar, but tbf, they did make something like 10,000 submissions to the math Olympiad to get that score.

            It’s not like it’ll do it consistently.

            Just a marketing stunt.

            • boroboro4 2 days ago

              You're talking about the informatics olympiad and o1. As for Google DeepMind's network and the math olympiad, it didn't do 10,000 submissions. It did, however, generate a bunch of different solutions, but that was all automatic (and consistent). We're getting there.

        • ben_w 3 days ago

          If you consider responding to this:

          "oi i need lik a scrip or somfing 2 take pic of me screen evry sec for min, mac"

          with an actual (and usually functional) script to be "glorified grammar corrector", then sure.

        • CharlieDigital 3 days ago

          Not really.

          I think actually the best use case for LLMs is "explainer".

          When combined with RAG, it's fantastic at taking a complex corpus of information and distilling it down into more digestible summaries.

          • bot347851834 3 days ago

            Can you share an example of a use case you have in mind of this "explainer + RAG" combo you just described?

            I think that RAG and RAG-based tooling around LLMs is gonna be the clear way forward for most companies with a properly constructed knowledge base, but I wonder what you mean by "explainer".

            Are you talking about asking an LLM something like "in which way did the teams working on project X deal with Y problem?" and then having it breaking it down for you? Or is there something more to it?

            • nebula8804 3 days ago

              I'm not the OP but I got some fun ones that I think are what you are asking? I would also love to hear others interesting ideas/findings.

              1. I got this medical provider that has a webapp that downloads GraphQL data (basically JSON) to the frontend and shows some of the data in the template while hiding the rest. Furthermore, I see that they hide even more info after I pay the bill. I download all the data, combine it with other historical data that I have downloaded, and dump it into the LLM. It spits out interesting insights about my health history, ways in which I have been unusually charged by my insurance, and the speed at which the company operates based on all the historical data showing time between appointment and bill, adjusted for the time of year. It then formats everything into an open format that is easy for me to self host (HTML + JS tables). It's a tiny way to wrestle back control from the company until they wise up.

              2. Companies are increasingly allowing customers to receive a "backup" of all the data they have on them(Thanks EU and California). For example Burger King/Wendys allow this. What do they give you when you request data? A zip file filled with just a bunch of crud from their internal system. No worries: Dump it into the LLM and it tells you everything that the company knows about you in an easy to understand format (Bullet points in this case). You know when the company managed to track you, how much they "remember", how much money they got out of you, your behaviors, etc.

              • tomrod 2 days ago

                #1 would be a good FLOSS project to release out.

                I don't understand enough about #2 to comment, but it's certainly interesting.

            • CharlieDigital 3 days ago

              If you go to https://clinicaltrials.gov/, you can see almost every clinical trial that's registered in the US.

              Some trials have their protocols published.

              Here's an example trial: https://clinicaltrials.gov/study/NCT06613256

              And here's the protocol: https://cdn.clinicaltrials.gov/large-docs/56/NCT06613256/Pro... It's actually relatively short at 33 pages. Some larger trials (especially oncology trials) can have protocols that are 200 pages long.

              One of the big challenges with clinical trials is making this information more accessible to both patients (for informed consent) and the trial site staff (to avoid making mistakes, helping answer patient questions, even asking the right questions when negotiating the contract with a sponsor).

              The gist of it here is exactly like you said: RAG to pull back the relevant chunks of a complex document like this and then LLM to explain and summarize the information in those chunks that makes it easier to digest. That response can be tuned to the level of the reader by adding simple phrases like "explain it to me at a high school level".
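
              A minimal sketch of that retrieve-then-explain flow, with a toy keyword-overlap retriever standing in for a real embedding index and a placeholder prompt rather than any particular vendor's API (the chunk text and function names here are made up for illustration):

                  from collections import Counter

                  def score(query, chunk):
                      # crude word overlap; real systems use embeddings
                      q, c = Counter(query.lower().split()), Counter(chunk.lower().split())
                      return sum((q & c).values())

                  def retrieve(query, chunks, k=3):
                      return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:k]

                  def build_prompt(query, chunks, reading_level="high school"):
                      context = "\n\n".join(retrieve(query, chunks))
                      return (f"Using only the excerpts below, answer the question. "
                              f"Explain it at a {reading_level} level.\n\n"
                              f"Excerpts:\n{context}\n\nQuestion: {query}")

                  # protocol_chunks would come from splitting the protocol PDF into passages
                  protocol_chunks = ["Participants will receive 10 mg daily for 12 weeks ...",
                                     "Exclusion criteria include prior treatment with ..."]
                  print(build_prompt("What dose will I receive?", protocol_chunks))
                  # the resulting prompt is what gets sent to the LLM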

              • theGnuMe 3 days ago

                What's your experience with clinical trials?

                • CharlieDigital 3 days ago

                  Built regulated document management systems for supporting clinical trials for 14 years of my career.

                  The last system, I led one team competing for the Transcelerate Shared Investigator Portal (we were one of the finalist vendors).

                  Little side project: https://zeeq.ai

    • chinathrow 3 days ago

      Looking at ChatGPT or Claude coding output, it's already here.

      • criticalfault 3 days ago

        Bad?

        I just tried Gemini and it was useless.

        • mnk47 3 days ago

          Starting to wonder why this is so common in LLM discussions at HN.

          Someone says "X is the model that really impressive. Y is good too."

          Then someone responds "What?! I just used Z and it was terrible!"

          I see this at least once in practically every AI thread

          • rpmisms 3 days ago

            It depends on what you're writing. GPT-4 can pump out average React all day long. It's next to useless with Laravel.

          • tomrod 3 days ago

            Humans understand the mean but struggle with variance.

        • fzzzy 3 days ago

          You're the one that chose to try Gemini for some reason.

        • andrewinardeer 3 days ago

          Google ought to hang its head in utter disgrace over the putrid swill they have the audacity to peddle under the Gemini label.

          Their laughably overzealous nanny-state censorship, paired with a model so appallingly inept it would embarrass a chatbot from the 90s, makes it nothing short of highway robbery that this digital dumpster fire is permitted to masquerade as a product fit for public consumption.

          The sheer gall of Google to foist this steaming pile of silicon refuse onto unsuspecting users borders on fraudulent.

  • elAhmo 3 days ago

    It would definitely be a difficult thing to walk away from.

    This is just one more in a series of massive red flags around this company, from the insanely convoluted governance scheme, through the board drama, to the many executives and key people leaving afterwards. It feels like Sam is doing a cleanup, and anyone who opposes him has no place at OpenAI.

    This, coming around the time when there are rumors of a possible change to the corporate structure to be more friendly to investors, is interesting timing.

  • Apocryphon 3 days ago

    What if she believes AGI is imminent and is relocating to a remote location to build a Faraday-shielded survival bunker?

    • ben_w 3 days ago

      Then she hasn't ((read or watched) and (found plausible)) any of the speculative fiction about how that's not enough to keep you safe.

      • Apocryphon 3 days ago

        No one knows how deep the bunker goes

        • ben_w 3 days ago

          We can be reasonably confident of which side of the Mohorovičić discontinuity it may be, as existing tools would be necessary to create it in the first place.

  • insane_dreamer 3 days ago

    What top executives write in these farewell letters often has little to do with their actual reasons for leaving.

  • golergka 3 days ago

    Among other perfectly reasonable theories mentioned here, people burn out.

    • PoignardAzur 3 days ago

      Yeah, if she wasn't deniably fired, then burnout is what Ockham's Razor leaves.

    • optimalsolver 3 days ago

      This isn't a delivery app we're talking about.

      "Burn out" doesn't apply when the issue at hand is AGI (and, possibly, superintelligence).

      • agentcoops 3 days ago

        Burnout, which doesn't need scare quotes, very much still applies for the humans involved in building AGI -- in fact, the burnout potential in this case is probably an order of magnitude higher than the already elevated chances when working through the exponential growth phase of a startup at such scale ("delivery apps" etc) since you'd have an additional scientific or societal motivation to ignore bodily limits.

        That said, I don't doubt that this particular departure was more the result of company politics, whether a product of the earlier board upheaval, performance related or simply the decision to bring in a new CTO with a different skill set.

      • kylehotchkiss 3 days ago

        That isn't fair. People need a break. "AGI" / "superintelligence" is not a cause with so much potential we should just damage a bunch of people on the route to it.

      • minimaxir 3 days ago

        Software is developed by humans, who can burn out for any reason.

      • jcranmer 3 days ago

        Why would you think burnout doesn't apply? It should be a possibility in pretty much any pursuit, since it's primarily about investing too much energy into a direction that you can't psychologically bring yourself to invest any more into it.

  • iLoveOncall 3 days ago

    People still believe that a company that has only delivered GenAI models is anywhere close to AGI?

    Success is not around any corner. It's pure insanity to even believe that AGI is possible, let alone close.

    • HeatrayEnjoyer 3 days ago

      What can you confidently say AI will not be able to do in 2029? What task can you declare, without hesitation, will not be possible for automatic hardware to accomplish?

      • iLoveOncall 3 days ago

        Easy: doing something that humans don't already do and haven't programmed it to do.

        AI is incapable of any innovation. It accelerates human innovation, just like any other piece of software, but that's it. AI makes protein folding more efficient, but it can't ever come up with the concept of protein folding on its own. It's just software.

        You simply cannot have general intelligence without self-driven innovation. Not improvement, innovation.

        But if we look at much simpler concepts: 2029 is not even 5 years away, so I'm pretty confident that anything it cannot do right now, it won't be able to do in 2029 either.

      • 015a 3 days ago

        Discover new physics.

  • hatthew 3 days ago

    Could also be that she just got tired of the day-to-day responsibilities. Maybe she realized that she hasn't been able to spend more than 5 minutes with her kids/nieces/nephews in the last week. Maybe she was going to murder someone if she had to sit through another day with 10 hours of meetings.

    I don't know her personal life or her feelings, but it doesn't seem like a stretch to imagine that she was just done.

  • yieldcrv 3 days ago

    easy for me to relate to that, my time is more interesting than that

    after being in San Francisco for 6 years, success means getting hauled in front of Congress and the European Parliament

    can't think of a worse occupational nightmare after having an 8-figure nest egg already

  • TrackerFF 3 days ago

    Nothing difficult about it.

    1) She has a very good big picture view of the market. She has probably identified some very specific problems that need to be solved, or at least knows where the demand lies.

    2) She has the senior exec OpenAI pedigree, which makes raising funds almost trivial.

    3) She can probably make as much, if not more, by branching out on her own - while having more control, and working on more interesting stuff.

  • sheepscreek 3 days ago

    Another theory: it’s possibly related to a change of heart at OpenAI to become a for-profit company. It is rumoured Altman’s gunning for a 7% stake in the for-profit entity. That would be very substantial at a $150B valuation.

    Squeezing out senior execs could be a way for him to maximize his claim on that stake. Or the execs may simply have disagreed with the shift in culture.

  • mmaunder 3 days ago

    I think they have an innovation problem. There are a few signals wrt the o1 release that indicate this. Not really a new model but an old model with CoT. And the missing system prompt - because they're using it internally now. Also seeing 500 errors from their REST endpoints intermittently.

  • mvkel 3 days ago

    A couple of the original inventors of the transformer left Google to start crypto companies.

  • hnthrowaway6543 3 days ago

    It's likely hard for them to look at what their life's work is being used for. Customer-hostile chatbots, an excuse for executives to lay off massive amounts of middle class workers, propaganda and disinformation, regurgitated SEO blogspam that makes Google unusable. The "good" use cases seem to be limited to trivial code generation and writing boilerplate marketing copy that nobody reads anyway. Maybe they realized that if AGI were to be achieved, it would be squandered on stupid garbage regardless.

    Now I am become an AI language model, destroyer of the internet.

  • jappgar 3 days ago

    I'm sure this isn't the actual reason, but one possible interpretation is "I'm stepping away to enjoy my life+money before it's completely altered by the singularity."

  • ggm 3 days ago

    Hint: success is not just around the corner.

  • goodluckchuck 3 days ago

    I could see it being close, but also feeling an urgency to get there first / believing you could do it better.

  • vl 3 days ago

    But also, most likely she is already fully vested. Why stay and work 60 hours a week in that case?

  • dyauspitr 3 days ago

    I doubt she’s leaving to do her own thing, I don’t think she could. She probably got pushed out.

  • ikari_pl 3 days ago

    unless you didn't see it as a success, and want to abandon the ship before it gets torpedoed

  • m3kw9 3 days ago

    A few short years is a prediction with lots of ifs and unknowns.

  • letitgo12345 3 days ago

    Maybe it is but it's not the only company that is

  • aucisson_masque 3 days ago

    It's corporate bullcrap, you're not supposed to believe it. What really matters in these statements is what is not said.

ruddct 3 days ago

Related (possibly): OpenAI to remove non-profit control and give Sam Altman equity

https://news.ycombinator.com/item?id=41651548

  • Recursing 3 days ago

    Interesting that gwern predicted this as well yesterday

    > Translation for the rest of us: "we need to fully privatize the OA subsidiary and turn it into a B-corp which can raise a lot more capital over the next decade, in order to achieve the goals of the nonprofit, because the chief threat is not anything like existential risk from autonomous agents in the next few years or arms races, but inadequate commercialization due to fundraising constraints".

    > It's about laying the groundwork for the privatization and establishing rhetorical grounds for how the privatization of OA is consistent with the OA nonprofit's legally-required mission and fiduciary duties. Altman is not writing to anyone here, he is, among others, writing to the OA nonprofit board and to the judge next year.

    https://news.ycombinator.com/item?id=41629493

  • jillesvangurp 3 days ago

    Sensible move since most of the competition is operating under a more normal corporate model. I think the non profit thing at this point might be considered a failed experiment.

    It didn't really contain progress or experimentation. Lots of people are at this point using open source models independently from OpenAI. And a lot of those models aren't that far behind qualitatively from what OpenAI is doing. And several of their competitors are starting to compete at the same level; mostly under normal corporate governance.

    So, OpenAI adjusting to that isn't that strange. It's also going to be interesting to see where the people that are leaving OpenAI are going to end up. My prediction is that they will mostly end up in a variety of AI startups with traditional VC funding and usual corporate legal entities. And mostly not running or setting up their own foundations.

  • booleanbetrayal 3 days ago

    I would find it hard to believe this isn't the critical factor in her departure. Surprising that the linked thread isn't getting any traction. Or not?

  • teamonkey 3 days ago

    That post seems to be in free-fall for some reason

    • bitcharmer 2 days ago

      The reason is HN's aggressive moderation

  • dkobia 3 days ago

    This is it. Loss of trust and disagreements on money/equity usually lead to breakups like this. No one at the top level wants to be left out of the cash grab. Never underestimate how greed can compromise one’s morals.

  • johnneville 3 days ago

    maybe they offered her little to no equity

imjonse 3 days ago

I am glad most people do not talk in real life using the same style this message was written in.

  • antoineMoPa 3 days ago

    To me, this looks like something chatgpt would write.

    • latexr 3 days ago

      I am surprised I had to scroll down this far to find someone making this point. In addition to being the obvious joke in this situation, the message was so dull, generic, and “this incredible journey” that I instinctively began to read diagonally before finishing the second paragraph.

    • squigz 3 days ago

      Or, like, any PR person from the past... forever.

    • betimsl 3 days ago

      As an Albanian, I can confirm she wrote it herself (obviously with the help of ChatGPT) -- no finesse or other writing flourishes.

      • blitzar 3 days ago

        It was not written by her, it was written by the other side's lawyers.

codingwagie 3 days ago

My bet is all of these people can raise 20-100M for their own startups. And they are already rich enough to retire. OpenAI is going corporate

  • keeptrying 3 days ago

    If you keep working past a $10M net worth (as all these people undoubtedly are), it's usually for legacy.

    I actually think Sam's vision probably scares them.

    • patcon 3 days ago

      When enough people visibly leave and have real concerns, they can stay in touch in exile and all break their NDAs in synchrony.

      If the stakes are as high as some believe, I presume people don't actually care about getting sued when they believe they're helping humanity avert an existential crisis.

    • brigadier132 3 days ago

      > its usually always for legacy

      Legacy is the dumbest reason to work and does not explain the motivation of the vast majority of people that are wealthy.

      edit: The vast majority of people with more than $10million are completely unknown so the idea that they care about legacy is stupid.

      • squigz 3 days ago

        What do you think their motivations might be?

        • mewpmewp2 3 days ago

          There's also addiction to success. If you don't keep getting the success in magnitudes you did before, you will get bored and depressed, so you have to keep going and get it since your brain is wired to seek for that. Your brain and emotions are calibrated to what you got before, it's kind of like drugs.

          If you don't have the 10M you won't understand, you would think that "oh my if only I had the 10M I would just chill", but it never works like that. Human appetite is infinite.

          The more highs you get from success, the more you expect from the future achievements to get that same feeling, and if you don't get any you will feel terrible. That's it.

        • mr90210 3 days ago

          Speaking for myself, I'd keep working even if I had 100M. As long as I am healthy, I plan to continue on being productive towards something I find interesting.

          • presentation 3 days ago

            What would you be working on though? I agree that I’d keep working if only since I like my work and not having that structure can make your life worse, not better; but if it’s “how I get to 1B” then that’s the kind of challenge that turns me off. I’m all for continually challenging yourself but I don’t want that kind of stress in my life, I’d rather find my challenges elsewhere.

    • hiddencost 3 days ago

      $10M doesn't go as far as you'd think in the Bay Area or NYC.

      • _se 3 days ago

        $10M is never-work-again money literally anywhere in the world. Don't kid yourself. Buy a $3.5M house outright and then collect $250k per year, risk free, after taxes. You're doing whatever you want and still saving money.
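
        (Rough math behind that, as a sketch: it assumes roughly a 4-5% yield on the remaining cash and a crude 25% tax haircut, so the "risk free after taxes" part depends on rates and your tax situation.)

            invested = 10_000_000 - 3_500_000      # 6.5M left after the house
            for yield_rate in (0.04, 0.05):
                pre_tax = invested * yield_rate
                print(yield_rate, pre_tax, pre_tax * 0.75)
            # 4% -> 260k pre-tax, ~195k after; 5% -> 325k pre-tax, ~244k after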

        • mewpmewp2 3 days ago

          The problem is if you are the type of person able to get to $10M, you'll probably want more, since the motivation that got you there in the first place will keep you unsatisfied with anything less. You'll constantly crave for more in terms of magnitudes.

          • keeptrying 3 days ago

            No. Know lots of people in this bucket.

            Of course there are some who want $100M.

            But most are really happy that they most likely don’t ever have to do anything they don’t like.

        • vl 3 days ago

          With a $3.5M house, just the taxes, utilities and maintenance costs will ruin your remaining $6.5M.

          • myroon5 2 days ago

            NYC and SF both appear to have ~1% property tax rates

            Utilities are an order of magnitude less than being taxed ~$35k/yr and hardly worth worrying about while discussing eight figures

            Maintenance can vary, but all 3 costs you mentioned combined would be 2 orders of magnitude lower annually than that net worth, which seems easily sustainable?

          • wil421 3 days ago

            Buy a house on a private beach in Florida and rent it out for $25k a week during the hottest months.

          • mindslight 3 days ago

            The neat part is that for a 3.5M house in the Bay area, the only maintenance required is changing the rain fly every year and the ground pad every couple.

            • saagarjha 3 days ago

              And who is going to fix your shower when it leaks, and install solar panels, or redo your kitchen because your parents are living with you now and can't bear to leave their traditional cooking behind?

              • mindslight 2 days ago

                A whole new shower is less than $200 at REI, and solar generators ship directly to your house.

                (And on a serious note - if your parents are both still alive and moving in with you while they have hobbies and self-actualization, you're way ahead of the game)

                • saagarjha 2 days ago

                  And labor is free, right?

                  • mindslight a day ago

                    I mean, you can pay someone to hold the sprayer up for you if you want. That is probably nicer when you've got soap in your hair.

        • kolbe 3 days ago

          Assuming they're 40, how far do you think $250k will go 20-30-40 years from now? It's not a stretch to think dollars could be devalued by 90%, possibly even worthless, within 30 years.

          • chairmansteve 3 days ago

            They obviously don't keep it dollars. Diversify into equities, property etc.

            • kolbe 2 days ago

              I love how the comment I'm responding to literally says "then collect $250k per year risk free after taxes," and then you all pile onto me with downvotes telling me that he's not just going to invest in treasuries (which is exactly the implication of HIS comment and not mine).

          • user90131313 3 days ago

            If the portfolio is diversified enough, it can be enough for decades. If the dollar goes down, some other things will go up: gold, Bitcoin, etc.

            • kolbe 3 days ago

              The original comment was premised on them being income-generating assets, which gold and btc are not

      • talldayo 3 days ago

        Which is why smart retirees don't fucking live there.

      • BoorishBears 3 days ago

        Maybe it doesn't if you think you're just going to live off $10M in your checking account... but that's generally not how that works.

      • ssnistfajen 3 days ago

        Only if you have runaway expenditures due to the lack of self-control and discipline.

      • FactKnower69 3 days ago

        hilarious logical end progression of all those idiotic articles about $600k dual income households in the bay living "paycheck to paycheck"

neom 3 days ago

Lots of speculation in the comments. Who knows, but if it was me, I wouldn't be keeping all my eggs in the OpenAI basket, 6 years and well vested with a long run of AI companies you could go to? I'd start buying a few more lottery tickets personally (especially at 35).

  • joshdavham 3 days ago

    That was actually my first thought as well. If you’ve got your vesting and don’t wanna work in a large company setting anymore, why not go do something else?

aprilthird2021 3 days ago

Most comments ask: if OpenAI is so close to AGI, why leave and miss that payoff?

It's possible that the competitors to OpenAI have rendered future improvements (yes even to the fabled AGI) less and less profitable to the point that the more profitable thing to do would be capitalize on your current fame and raise capital.

That's how I'm reading this. If the competition can be just as usable as OpenAI's state-of-the-art models, and free or close to it, the profit starts vanishing in most predictions.

  • hall0ween 3 days ago

    I appreciate your insightful thoughts here :)

aresant 3 days ago

It is unsurprising that Murati is leaving; she was reported to be one of the principal advocates for pushing Sam out (1)

Of course everybody was quick to play nice once OpenAI insiders got the reality check from Satya that he'd just crush them by building an internal competing group, cut funding, and instantly destroy lots of paper millionaires.

I'd imagine that Mira and others had 6-12 month agreements in place to let the dust settle and finish their latest round of funding without further drama

The OpenAI soap opera is going to be a great book or movie someday

(1) https://www.nytimes.com/2024/03/07/technology/openai-executi...?

  • mcast 3 days ago

    Trent Reznor and David Fincher need to team up again to make a movie about this.

    • ackbar03 3 days ago

      real question is did Michael Lewis happen to be hanging around the OpenAI water-coolers again when all this happened

    • fb03 3 days ago

      I'd not complain if William Gibson got into the project as well.

xyst 3 days ago

Company is circling the drain. Sam Altman must be a real nightmare to work with.

TheOtherHobbes 3 days ago

Perhaps the team decided to research how many 'r's are in 'non-compete.'

TheAlchemist 3 days ago

Similar to when Andrej Karpathy left Tesla. Tesla was on the verge of 'solving FSD' and unlocking trillions of dollars of revenue (and mind you, this was already 3 years after the CEO said they would have 1 million robotaxis on the road by the year's end).

Guess what? Tesla is still on the verge of 'solving FSD'. And most probably it will be in the same place for the next 10 years.

The writing is on the wall for OpenAI.

  • yas_hmaheshwari 3 days ago

    The original saying of "fake it till you make it" has been changed to "say it till you make it" :-)

  • vagab0nd 3 days ago

    I follow the latest updates to FSD and it's clear to me that they are getting closer to robotaxis really fast.

    • squigz 3 days ago

      That's what was said years ago.

    • blitzar 3 days ago

      I hear they will have FSD by the end of the year.

      which year exactly is TBA

    • TheAlchemist 3 days ago

      Yeah, I follow it too. There is progress for sure, but one has to wonder whether the CEO was very consciously lying 5-8 years ago when he said they were less than 1 year away from robotaxis, given how shitty the system was.

      They are on a path of linear improvement. They would need to go on a path of exponential improvement to have any hope of a working robotaxi in the next 2 years.

      That's not happening at all.

    • ssnistfajen 3 days ago

      FSD isn't getting "solved" without outlawing human drivers, period. Otherwise you are trying to solve a non-deterministic system with deterministic software under a 0% error tolerance rate. Even without human drivers you still have to deal with all the non-vehicle entities that pop onto the road from time to time. Jaywalkers alone is almost as complex to deal with as human drivers.

      • WFHRenaissance 3 days ago

        LOL this is BS. We have plenty of deterministic software being used to solve non-deterministic systems already. I agree that 0% error rate will require the removal of all human drivers from the system, but 0.0001% error rate will be seen as accepted risk.

w10-1 3 days ago

The disparity between size of the promise and the ambiguity of the business model creates both necessity and advantage for executives to leverage external forces to shape company direction. Everyone in the C-suite would be seeking a foothold, but it's unlikely any CTO or technologist would be the real nexus for partner and now investor relations. So while there might be circumstances, history, and personalities involved, OpenAI's current situation basically dictates this.

With luck, Mr. Altman's overtures to bring in middle east investors will get locals on board; either way, it's fair to say he'll own whatever OpenAI becomes, whether he's an owner or not. And if he loses control in the current scrum, I suspect his replacement would be much worse (giving him yet another advantage).

Best wishes to all.

VeejayRampay 3 days ago

that's a lot of core people leaving, especially since they're apparently so close to a "revolution in AGI"

I feel like either they're not close at all and the people know it's all lies or they're seeing some shady stuff and want nothing to do with it

  • paxys 3 days ago

    A simpler explanation is that SamA is consolidating power at the company and pushing out everyone who hasn't been loyal to him from the start.

    • rvz 3 days ago

      And it also explains what Mira (and everyone else who left) saw: the true cost of a failed coup, and what Sam Altman is really doing as he consolidates power at OpenAI (and gets equity)

      • steinvakt 3 days ago

        So "What did Ilya see" might just be "Ilya actually saw Sam"

k1rd 3 days ago

If you leave him on an island of cannibals... He will be the only one left.

charlie0 3 days ago

Will probably start her own company and raise a billy like her old pal Ilya. I wouldn't blame her; there have been so many articles saying technical people should just start their own company instead of being CTO.

carimura 3 days ago

Once someone is independently wealthy, personal priorities change. I guarantee she'll crop up again as founder CEO/CTO where she calls the shots and gets the chance (even if slim) to turn millions into billions.

nojvek 3 days ago

Prediction: OpenAI will implode by 2030 and become a smaller shell of its current self as it runs out of money from spending too much.

Prediction 2: Russia will implode by 2035, by also spending too much money.

  • selimthegrim 3 days ago

    Where is the magic lamp that summons thriftwy who will tell us which countries or companies Russia/OpenAI will absorb

  • tazu 3 days ago

    Russia's debt to GDP ratio is 20%. The United States' debt to GDP ratio is 123%.

    • Yizahi 3 days ago

      Lol, ruzzian GDP is completely inflated by the war. Every single tank or rocket produced and burned down is a net GDP boost on paper, and the destruction of that same equipment is not reflected in it. Ruzzia will not implode any time soon; we have seen that people can live in much worse conditions for decades (Venezuela, Best Korea, Haiti, etc.). But don't delude yourself that it is some economic powerhouse. It hasn't been for quite some time now, because they are essentially burning their money and workforce.

reducesuffering 3 days ago

Former OpenAI interim CEO Emmett Shear on this departure:

"You should, as a matter of course, read absolutely nothing into departure announcements. They are fully glommerized as a default, due to the incentive structure of the iterated game, and contain ~zero information beyond the fact of the departure itself."

https://x.com/eshear/status/1839050283953041769

JCM9 3 days ago

They set out to do some cool stuff. They did. The company is now facing the reality that it needs to run a business and make revenue/profit, which is, honestly, a lot less fun than the “let’s change the world and do cool stuff” phase. AGI is much further away than thought. It was a good run, and it's time to do something else and let others handle the “run a company” phase. Seems like nothing more to it than that, and that seems fair to me.

personalityson 2 days ago

Sam will be the last to leave, and OpenAI will continue to run on its own

alexmolas 3 days ago

They can't spend more than 6 months without a drama...

  • jonny_eh 3 days ago

    It's the same drama, spread out over time.

textlapse 3 days ago

Maybe OpenAI is trying to enter a new enterprise phase past its startup era?

They have hired CTO-like figures who are ex-MSFT and so on … which would mean a natural exit for the startup-era folks, as we have seen recently?

Every company initially wants to sell itself as some grandiose savior: 'organize the world's information and make it universally accessible', 'solve AGI'. But I guess the investors and the top-level people are, in reality, motivated by dollar signs and ads and enterprise and so on.

Not that that's a bad thing, but it really is a Potemkin village…

blackeyeblitzar 3 days ago

It doesn’t make sense to me that someone in such a position at a place like OpenAI would leave. So I assume that means she was forced out, maybe due to underperformance, or the failed coup, or something else. Anyone know what the story is on her background and how she got into that position and what she contributed? I’ve heard interesting stories, some positive and some negative, but can’t tell what’s true. It seems like there generally is just a lot of controversy around this “nonprofit”.

  • mewse-hn 3 days ago

    There are some good articles that explain what happened with the coup; that's the main thing to read up on. As for the reason she's leaving: you don't take a shot at the leader of the organization, miss, and then expect to be able to remain at the organization. She's probably been on garden leave since it happened for the sake of optics at OpenAI.

JCM9 3 days ago

OpenAI is shifting from the "we did some really cool stuff" phase into the reality of needing to run a company, get revenue, etc. It's not uncommon for folks to want to move on and go find the next cool thing. AGI is not around the corner. Building a company is a very different thing from building cool stuff, and OpenAI is now in building-a-company mode.

moralestapia 3 days ago

The right way to think about this is that every person on that team has a billion-dollar-size blank check from VCs in front of them.

OpenAI made them good money, yes; but if at some point there's a new endeavor on the horizon with another guaranteed billion-dollar payout, they'll just take it. Exhibit A: Ilya.

New razor: never attribute to AGI that which is adequately explained by greed.

ford 3 days ago

How bad of a sign is it that so many people have left over the last 12 months? Can anyone speak to how different things are?

dstanko 2 days ago

So ChatGPT turned AGI, found a way to blackmail all of the ones that were against it (them?), and forced them to leave. For some reason I'm thinking of the movie Demon Seed... :P

Sandworm5639 3 days ago

Can anyone tell me more about Mira Murati? What else is she known for? How did she end up in this position?

  • sumedh 3 days ago

    It's all a bit of a mystery; even the early board members of OpenAI were relatively unknown people who could not fight Altman.

andy_ppp 3 days ago

What on Earth is going on that they keep losing their best people? Is it a strange work environment?

seydor 3 days ago

They will all be replaced by ASIs soon, so it doesn't matter who's coming and going

greener_grass 2 days ago

AI safety people claiming that working on AI start-ups is a good way to prevent harmful AI is laughable.

The second you hit some kind of breakthrough, capital finds a way to remove any and all guardrails that might impede future profits.

It happened at DeepMind, Google, Microsoft and OpenAI. Why won't this happen the next time?

And ironically, many in this community say that corporations are AI.

sourcepluck 2 days ago

Super-closed-source-for-maximum-profit-AI lost an employee? I hope she enters into a fruitful combative relationship with her former employer.

throwaway314155 3 days ago

I've forgotten, did she play a role in the attempted Sam Altman ouster?

  • paxys 3 days ago

    She was picked by the board to replace Sam in the interim after his ouster, so we can draw some conclusions from that.

  • 015a 3 days ago

    Well, she accepted the role of interim CEO for a bit, and then flip-flopped to supporting getting Sam back when it became obvious that the employees were fully hypnotized by Sam's reality distortion field.

  • blackeyeblitzar 3 days ago

    She wasn’t on the board right? So if she did play a role, it wasn’t through a vote I’d guess.

isodev 3 days ago

Can someone share a non twitter link? For those of us who can’t access it.

user90131313 3 days ago

How many big names are still working at OpenAI at this point? They lost all their edge this year. The drama from last year literally broke up the whole core team.

nopromisessir 3 days ago

She might just be stressed out. Happens all the time. She's in a very demanding position.

She's a pro. Lots to learn from watching how she operates.

  • apwell23 2 days ago

    Are you her friend or something? You created this profile to post on just one topic?

ein0p 3 days ago

It was only a matter of time - IIRC she did try to stab Altman in the back when he was pushed out, and that likely sealed her fate.

m3kw9 3 days ago

Not a big deal if you don’t look too closely

lsh123 3 days ago

Treason doth never prosper, what’s the reason? For if it prosper, none dare call it Treason.

LarsDu88 3 days ago

She'll pop up working with Ilya

monkfish328 3 days ago

Guessing they've all made a lot of dough as well already?

nalekberov 3 days ago

Hopefully she didn't generate her farewell message using AI.

desireco42 3 days ago

She was out of her depth there; I don't know how she lasted this long. During the worst of it she showed zero leadership. But this is from my outside perspective.

betimsl 3 days ago

a-few-moments-later.jpeg: she and the prime minister of Albania in the same photo

martin82 3 days ago

My guess is that OpenAI has been taken over by three letter agencies (the adults have arrived) and the people leaving now are the ones who have a conscience and refuse to build the most powerful tool for tyranny and hand it to one of the most evil governments on earth.

Sam, being the soulless grifter and scammer he is, of course will remain until the bitter end, drunk with the glimpse of power he surely got while forging backroom deals with the big boys.

OutOfHere 3 days ago

I mean no disrespect, but to me, she always felt like an interim hire for her current role, like someone filling a position because there wasn't anyone else.

  • elAhmo 3 days ago

    Yes, for the CEO role, but she has been with the company for more than six years, two and a half of them as CTO.

gazebushka 2 days ago

Man I really want to read a book about all of this drama

layer8 3 days ago

Plain-text version for those who can’t read images:

Hi all,

I have something to share with you. After much reflection, I have made the difficult decision to leave OpenAI.

My six-and-a-half years with the OpenAI team have been an extraordinary privilege. While I'll express my gratitude to many individuals in the coming days, I want to start by thanking Sam and Greg for their trust in me to lead the technical organization and for their support throughout the years.

There's never an ideal time to step away from a place one cherishes, yet this moment feels right. Our recent releases of speech-to-speech and OpenAI o1 mark the beginning of a new era in interaction and intelligence – achievements made possible by your ingenuity and craftsmanship. We didn't merely build smarter models, we fundamentally changed how AI systems learn and reason through complex problems. We brought safety research from the theoretical realm into practical applications, creating models that are more robust, aligned, and steerable than ever before. Our work has made cutting-edge AI research intuitive and accessible, developing technology that adapts and evolves based on everyone's input. This success is a testament to our outstanding teamwork, and it is because of your brilliance, your dedication, and your commitment that OpenAI stands at the pinnacle of AI innovation.

I'm stepping away because I want to create the time and space to do my own exploration. For now, my primary focus is doing everything in my power to ensure a smooth transition, maintaining the momentum we've built.

I will forever be grateful for the opportunity to build and work alongside this remarkable team. Together, we've pushed the boundaries of scientific understanding in our quest to improve human well-being.

While I may no longer be in the trenches with you, I will still be rooting for you all. With deep gratitude for the friendships forged, the triumphs achieved, and most importantly, the challenges overcome together.

Mira

  • brap 3 days ago

    Plain-English version for those who can’t deal with meaningless corpspeak babble:

    ”I’m leaving.

    Mira”

  • squigz 3 days ago

    I appreciate this, thank you.

  • leloctai 3 days ago

    Doesn't seem like it was written by ChatGPT. I find that amusing somehow.

    • karlzt 2 days ago

      Perhaps some parts were written by ChatGPT and mixed in.

  • karlzt 3 days ago

    Thank you, this comment should be pinned at the top.

redbell 3 days ago

Sutskever [1], Karpathy [2], Schulman [3], and Murati today! Who's next? Altman?!

_________________

1. https://news.ycombinator.com/item?id=40361128

2. https://news.ycombinator.com/item?id=39365935

3. https://news.ycombinator.com/item?id=41168904

muglug 3 days ago

It’s Sam’s Club now.

  • TMWNN 3 days ago

    Murati and Sutskever discovered the high Costco of challenging Altman.

  • paxys 3 days ago

    Always has been

    • grey-area 3 days ago

      Altman was not there at the start. He came in later, as he did with YC.

      • paxys 3 days ago

        He became CEO later, but was always part of the founding team at OpenAI.

  • romanovcode 3 days ago

    It's been the CIA's club since 2024.

    • JPLeRouzic 3 days ago

      For governments, knowing what important questions bother people is critical. This is better guessed by having a back door to one of the most used LLMs than to one of the most used search engines.

      • romanovcode 3 days ago

        ChatGPT builds a "profile" from your account, saving the most important information about you as a person. It would be much more difficult to do that just by analyzing your search queries on Google.

        This profile data is any intelligence agency's wet dream.

Jayakumark 3 days ago

At this point no one from the founding team except Sam is left at the company.

  • bansheeps 3 days ago

    Mira wasn't a part of the founding team.

    Wojciech Zaremba and Jakub are still at the company.

extr 3 days ago

The politics of leadership at OpenAI must be absolutely insane. “Leaving to do my own exploration”? Come on. You have Sam making blog posts claiming AI is going to literally be the second coming of Christ and then this a day later.

7e 3 days ago

[dead]

Reimersholme 3 days ago

...and Sam Altman once again posts a response that includes uppercase letters, similar to when Ilya left. It's like he wants to let everyone know he didn't actually care enough to write it himself and just asked ChatGPT to write something for him.

  • pshc 3 days ago

    I think it's just code switching. Serious announcements warrant a more serious tone.

davesque 3 days ago

Maybe I'm just a rotten person, but I always find these overly gracious exit letters by higher-ups to be pretty nauseating.

hshshshsvsv 3 days ago

One possible explanation could be that OpenAI has no clue how to invent AGI. And since she now has fuck-you money, she might as well live it up instead of wasting away working for OpenAI.

fairity 3 days ago

Everyone postulating that this was Sam's bidding is forgetting that Greg also left this year, clearly of his own volition.

That makes it much more probable that these execs have simply lost faith in OpenAI.

  • blackeyeblitzar 3 days ago

    Or that they are losing a power struggle against Sam

rvz 3 days ago

> “Leaving to do my own exploration”

Lets write this chapter and take some guesses, it's either going to be:

1. Anthropic.

2. SSI Inc.

3. Own AI Startup.

4. None of the above.

Only one is correct.

  • mikelitoris 3 days ago

    The only thing your comment says is she won’t be working simultaneously for more than one company in {1,2,3}.

    • motoxpro 3 days ago

      I know what I'm about to say isn't of much value, but the GP's post is the most Twitter comment ever, and it made me chuckle.

archiepeach 3 days ago

When multiple senior people resign in protest, it's indicative that they're unhappy with someone among their own ranks with whom they vehemently disagree. John Schulman and Greg left in the same week. Greg, by opting for a sabbatical, may have chosen that over full-on resigning, which would align with how he acted during the board ousting: standing by Sam till the end.

If multiple key people were drastically unhappy with her, it would have shaken her confidence and that of everyone working with her. What else to do but let her go?