axus 9 hours ago

The most important clue to solving a difficult problem is knowing that somebody else has already solved it.

  • Nevermark 6 hours ago

I worked on a problem for a couple of months once. My professor was mid-sentence telling me he had found someone with the solution when I rudely blurted it out myself.

My mind was so familiar with all the constraints that all I needed to know was that a solution existed, and then I knew exactly where it had to be.

But before I knew there was a solution, I hadn't realized that.

  • LPisGood 7 hours ago

I had a professor in an additive combinatorics class who would (when appropriate) say “hint: it’s easy,” and as silly as it is, it usually helped a lot.

  • baxtr 9 hours ago

    The problem is time and resources.

    Take building a viable company. You know that many people have solved this. But you also know that 9/10 fail.

    So you need the time and the money to try enough times to make it work.

    • shermantanktop 4 hours ago

You're describing brute-forcing through repetition. The paper is essentially about increasing the chance of success by training a model that learns from failure.

That may not apply directly to building a viable company. It might suggest that new companies should avoid replicating elements of failed ones.

    • djdjdhdh 8 hours ago

9/10 of VC-backed companies fail. Not "companies" in general. Ignore the hype and you'll be more likely to succeed.

      • stonemetal12 7 hours ago

        As far as I am aware it is 8/10 across the broader landscape. A little better, but not much.

        • fhuteedc 6 hours ago

Twice as likely to succeed is not insignificant; that's a much better chance. You're being lied to by folks who want to make you their slave.

          https://clarifycapital.com/blog/what-percentage-of-businesse...

That 80% number is after 20 years, which is far longer than almost anyone stays at the same employer. And some of those "failures" may just be owners retiring.

You're being lied to. The myths of Silicon Valley are not there for the benefit of founders.

  • truelson 5 hours ago

The 4-minute mile comes to mind.

    • paulorlando 5 hours ago

While Bannister’s 4-minute mile record is used as an example of a psychological barrier, there’s also a reinterpretation of the meaning behind his record. Before his 1954 race, the record for the mile had stood at just over 4 minutes (4:01.4) for 9 years. Mile records were set during WWII, but they were all set by Swedish runners (Sweden being neutral in the war). The record today, which has stood since 1999, is 3:43.13. It's not a round number, so it gets less attention. Maybe that's why we don't think of it as a psychological barrier.

      • NooneAtAll3 3 hours ago

        so it's all a question of marketing

        343 is 7 cubed, so just call it "cube barrier!" and it becomes a worthy challenge

abtinf 8 hours ago

> The [goal] of machine learning research is to [do better than humans at] theorem proving, algorithmic problem solving, and drug discovery.

Naively, one of those things is not like the others.

When I run into things like this, I just stop reading. My assumption is that a keyword is being thrown in for grant purposes. Who knows what other aspects of reality have been subordinated to politics by the writer.

  • dgacmu 7 hours ago

    These have all been stated as goals by various machine learning research efforts. And -- they're actually all examples in which a better search heuristic through an absolutely massive configuration space is helpful.

  • captainclam 7 hours ago

    You must not end up reading much scientific literature then.

  • LinuxAmbulance 7 hours ago

    What's the issue with drug discovery? AI/ML assisted drug discovery is one of the better examples of successful AI utilization out there.

richard___ 8 hours ago

How does this compare to just reducing the likelihood of negative samples?

qqxufo 6 hours ago

Failure doesn’t teach by default; it teaches only when you design for it. Three dials matter: cost, frequency, and observability.

Make failures cheap and reversible. Shrink the scope until a rollback is boring. If a failure requires a committee or a quarter to undo, you’ll avoid the very experiments you need.

Raise frequency deliberately. Schedule “bad ideas hour” or small probes so you don’t wait for organic disasters to learn.

Max out observability. Before you try, write the few assumptions the test could falsify. Log what would have changed your mind earlier (counterfactual triggers), not just what happened.

Two practices that compound:

1. Pre-mortem → post-mortem symmetry. In the pre-mortem, list concrete failure modes and “tripwires”; in the post-mortem, only record items that map back to one of those or add a new class with a guardrail/checklist—not “be more careful.”

2. Separate noise from surprise. Tag outcomes as variance vs. model error. Punishing variance breeds risk aversion; fixing model error improves judgment.
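A minimal sketch of those two practices in code (Python here; the Probe class, the tripwire names, and the example data are hypothetical, just to illustrate the shape):

    from dataclasses import dataclass, field
    from enum import Enum

    class OutcomeTag(Enum):
        VARIANCE = "variance"        # the assumption held; the draw was just unlucky
        MODEL_ERROR = "model_error"  # the assumption itself was wrong

    @dataclass
    class Probe:
        name: str
        tripwires: list[str]                        # concrete failure modes from the pre-mortem
        findings: list[tuple[str, OutcomeTag]] = field(default_factory=list)

        def record(self, failure_mode: str, tag: OutcomeTag) -> None:
            # Post-mortem entries must map back to a pre-mortem tripwire,
            # or they register a new failure class (never just "be more careful").
            if failure_mode not in self.tripwires:
                self.tripwires.append(failure_mode)
            self.findings.append((failure_mode, tag))

        def model_errors(self) -> list[str]:
            # Only model errors should change judgment; punishing variance
            # just breeds risk aversion.
            return [mode for mode, tag in self.findings if tag is OutcomeTag.MODEL_ERROR]

    probe = Probe("pricing probe", tripwires=["churn spike", "support load"])
    probe.record("churn spike", OutcomeTag.VARIANCE)             # noisy week, assumption held
    probe.record("onboarding drop-off", OutcomeTag.MODEL_ERROR)  # new failure class: fix the model
    print(probe.model_errors())                                  # ['onboarding drop-off']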

Hard problems rarely yield to heroics; they yield to lots of small, instrumented failures.

  • stuffn 5 hours ago

    AI post with emdashes removed.

    Clearly didn't send the article to the LLM.

  • dennisy 5 hours ago

    Is this related to the article?

    • nalllar 5 hours ago

      qqxufo's recent posts read like a large langle mangle to me

      • glompers 4 hours ago

Not to me. The post in question could easily be expanded into a recognizable Paul Graham essay and no one would bat an eye.