cancerhacker 3 days ago

I am reminded of this immortal koan:

Sussman attains enlightenment

In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.

“What are you doing?”, asked Minsky.

“I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied.

“Why is the net wired randomly?”, asked Minsky.

“I do not want it to have any preconceptions of how to play”, Sussman said.

Minsky then shut his eyes.

“Why do you close your eyes?”, Sussman asked his teacher.

“So that the room will be empty.”

At that moment, Sussman was enlightened.

  • 0x1ceb00da 3 days ago

    I didn't understand.

    • dbetteridge 3 days ago

      You're training it on data already, which by extension will contain information about the game.

      Training a randomly wired net is like pretending that the training data won't give it pre-conceptions, when the training data is what drives the model.

      (Pretending that closing one's eyes removes all items from existence.)

      • ninjin 3 days ago

        My interpretation has always been that any wiring of the network carries its own bias. Thus, a random wiring is biased as well. Your point still stands perfectly on its own though and now I am less sure about the interpretation I have had over all these years.

        • Dylan16807 3 days ago

          I'm sure you're right. "Preconception" is more specific than bias and would apply before the training starts. Something it learns from the training is not a preconception it had. And I wouldn't assume there even is "training data" that existed when the net was initialized.

tbrownaw 3 days ago

Yes, this was very silly, to the degree that it gives more reason to worry a bit about the future of their side projects (I kinda like Firefox), but I don't really see anything that sounds dangerous?

  • ffhhj 3 days ago

    The whole dot-com bubble burst was caused by lots of companies wasting money on stupid ideas at the same time.

    • bbarnett 3 days ago

      Huh? I thought it was caused by chairs.

      That said, I don't see any difference between dot-com bubbles busting, regardless of decade, and them not busting. From my perspective, I see stupid (along with good) all the time.

      • ffhhj 3 days ago

        Yeah, anything can cause it - chairs, houses, people rethinking their investments:

        > On March 20, 2000, Barron's featured a cover article titled "Burning Up; Warning: Internet companies are running out of cash—fast", which predicted the imminent bankruptcy of many Internet companies.[48] This led many people to rethink their investments. That same day, MicroStrategy announced a revenue restatement due to aggressive accounting practices. Its stock price, which had risen from $7 per share to as high as $333 per share in a year, fell $140 per share, or 62%, in a day.[49] The next day, the Federal Reserve raised interest rates, leading to an inverted yield curve, although stocks rallied temporarily.[50]

        Keep the kool-aid flowing to avoid it. This can't happen twice... wait, what was that in 2008 again?

gepardi 3 days ago

Why is this author so mad?

  • seanhunter 3 days ago

    Because if they posted any sort of reasonable article it wouldn't get traction (including being posted here).

  • ThrowawayTestr 3 days ago

    People are inherently trusting LLMs without even remotely understanding them.

  • sandwitches 3 days ago

    Because AI bullshit is ruining everything we've ever loved?

    • rramadass 3 days ago

      Exactly! This is lunacy.

    • Grimblewald 2 days ago

      I'd argue AI accelerates bullshit slingers' output disproportionately to genuine creators', so they're flooding the space - but people are the problem, not AI (yet)

  • lurking_swe 3 days ago

    probably because they stumbled upon yet another “AI” blog post that is a complete waste of time.

    I’d be annoyed too. At least this guy’s angry blog post is somewhat educational - reminding us of biases and how NOT to use LLMs.

    all this being said, time to go outside and touch some grass. Get off the computer for a bit.

  • rsynnott 2 days ago

    I can't speak for them, obviously, but, argh, this AI bullshit is just getting so tedious. Like, the methodology described in the article is just _nonsense_.

    > "On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question"

    Like, people have known this doesn't work since literally the First Industrial Revolution. And yet still we try asking the magic oracle to unbias our data.

lolinder 3 days ago

I'm as sick of AI nonsense as anyone and I'm a chronic Mozilla critic (I mostly wish they'd just accept my donations for Firefox and focus exclusively on that). That said, this post is over the top.

If you read the blog post it's pretty obvious what happened. Someone at Mozilla.ai had an extra day on their hands and ran a bunch of text they'd collected through a few models. They thought "hey, this is kind of cool, let's make a blog post about it". Then they wrote one stupid line about their motivations (likely made up to justify playing around with local models) and got completely lambasted for that one stupid line.

I'd rather live in a world where people are comfortable throwing together a quick blog post detailing a fun/stupid project they did than one in which they do that anyway but are hesitant to share because people will rake them over the coals for being "unserious".

  • zem 3 days ago

    > Then they wrote one stupid line about their motivations (likely made up to justify playing around with local models) and get completely lambasted for that one stupid line.

    to be fair, it's worth lambasting them over it, because they are perpetuating the myth that AI is bias-free (which a lot of people actually do believe!) and putting the weight of Mozilla's reputation behind it

    • ertian 3 days ago

      Yeah, I think the only real problem with this is that it was posted under the auspices of Mozilla's official foundation. If it was a personal blog post, it'd be just fine.

    • Cacti 3 days ago

      thanks but my reputation already has a colorway

    • Legend2440 3 days ago

      On the flip side: A lot of claims of AI bias are overblown and/or straight-up disingenuous.

      Remember when ProPublica claimed that recidivism prediction models were biased against black people? (https://www.propublica.org/article/how-we-analyzed-the-compa...)

      They rigged their analysis. There are two competing families of fairness metrics - calibration (equal precision across groups) and error-rate balance (equal false positive/negative rates across groups) - that cannot possibly be fulfilled at the same time if the base rates differ between the groups. (https://arxiv.org/abs/1609.05807v2)

      The model was fair all along according to the calibration metric. ProPublica picked the error-rate metric so as to make the model appear biased. The whole thing was a lie.
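      A quick numeric sketch of that impossibility result (toy numbers chosen only for arithmetic clarity, not taken from the actual COMPAS data): hold precision equal across two groups with different base rates, and the false positive rates necessarily diverge.

```python
# Toy illustration of the Kleinberg/Mullainathan/Raghavan impossibility
# result: if two groups have different base rates, a classifier with
# equal precision in both groups (calibration-style fairness) cannot
# also have equal false positive rates (error-rate fairness).
# All numbers below are hypothetical.

def false_positive_rate(n, base_rate, flagged, precision):
    """FPR among people who would NOT reoffend, for a group of n people."""
    true_pos = flagged * precision        # flagged people who do reoffend
    false_pos = flagged - true_pos        # flagged people who do not
    negatives = n * (1 - base_rate)       # people who never reoffend
    return false_pos / negatives

# Same precision (0.6) in both groups, different base rates:
fpr_high = false_positive_rate(n=1000, base_rate=0.5, flagged=500, precision=0.6)
fpr_low = false_positive_rate(n=1000, base_rate=0.2, flagged=200, precision=0.6)

print(fpr_high)  # 0.4: non-reoffenders in the high-base-rate group
print(fpr_low)   # 0.1: are flagged four times as often
```

      Equal precision, yet the group with the higher base rate sees four times the false positive rate - exactly the disparity ProPublica reported, arising from arithmetic rather than from a biased model.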

      • tbrownaw 3 days ago

        > On the flip side: A lot of claims of AI bias are overblown and/or straight-up disingenuous.

        The same is true regarding humans though, so "use a magic ai to avoid bias from humans" still doesn't make any sense.

  • tbrownaw 3 days ago

    > If you read the blog post it's pretty obvious what happened. Someone at Mozilla.ai had an extra day on their hands and ran a bunch of text they'd collected through a few models.

    That's certainly a possibility, but the post sure looks like it's trying to claim otherwise.

  • gepardi 3 days ago

    Agreed. Not sure why author is so enraged.

  • rendaw 3 days ago

    By going on the official blog, Mozilla is putting their reputation behind it, be it one person or a team.

langsoul-com 3 days ago

Blog article is a bit edgy. It's more like Mozilla.ai is an AI consultancy that just uses AI for the buzzword effect.

Man, Mozilla really does the most useless things. I'm really surprised at just how bad they are at generating any profit; the silly ideas and products are wild.

Can't they just try to make Firefox the best and ubiquitous, and one day actually contend against Chrome?

  • Cacti 3 days ago

    they’re there to be Google's bearded monopoly foil, nothing more

rainonmoon 3 days ago

The mention of objectivity in connection with LLM output is obviously farcical, but I'm curious about the motivation behind the experiment. Surely the value of speaking to organisations already deploying LLMs in live workplaces is identifying specific, solvable issues (aligned with Mozilla.ai's stated objectives), e.g. Project Zero's recent post about trying to make LLMs work in security research. Generalising those claims doesn't seem like a meaningful action, and as OP pointed out, doesn't provide any revelations to anyone with even a cursory view of the landscape. Mozilla's blog post ultimately seems more like marketing than a genuine attempt at research, so from that lens it doesn't get me as heated as OP. But that is a tension Mozilla should be aware of if they're actually trying to build credibility with their blog while pushing SEO alongside whatever research they do end up publishing.

  • tbrownaw 3 days ago

    Hopefully the reality is that the interviews were for a serious attempt to learn things, and then the ai summaries were a separate "just for fun" thing that someone with a bit of downtime decided to do.

    But that's not how I read the Mozilla post that this post is ragging on.

    • dialup_sounds 3 days ago

      When they described 50 pages of notes as a "large text dataset", I definitely got the sense that they weren't planning on reading them.

chx 3 days ago

How could anything related to LLMs be serious?

https://hachyderm.io/@inthehands/112006855076082650

> You might be surprised to learn that I actually think LLMs have the potential to be not only fun but genuinely useful. “Show me some bullshit that would be typical in this context” can be a genuinely helpful question to have answered, in code and in natural language — for brainstorming, for seeing common conventions in an unfamiliar context, for having something crappy to react to.

> Alas, that does not remotely resemble how people are pitching this technology.

Yes, people get bamboozled because LLMs are trained to bamboozle them, Raskin didn't call them "a zero day vulnerability for the operating system of humanity" for nothing -- but that's all there is.

protocolture 3 days ago

The tiniest storm in the smallest teacup.

sillysaurusx 3 days ago

> Mozilla.ai is not a serious organization. It seems to be just another “AI” clown car.

If this is true, then I’m a court jester, because none of my projects started as serious work by a serious organization. And ML wasn’t lame until everyone started taking it so seriously.

The key with ML is to have fun. Even the most serious researcher has this motivation, even if they won’t admit it. If you could somehow scan their brain and look, you’d see that all the layers of seriousness are built around the core drive to have fun. Winning a dota 2 tournament was serious work, but I’ll wager any sum of money they picked dota because it seemed like a fun challenge.

If the author is looking for a serious AI organization, they should start one. Otherwise they’re not really qualified to say whether the work is bad. I have no opinion on Mozilla’s project here, but at a glance it looks well-presented with an interesting hypothesis. All of my work started with those same objectives, and it’s mistaken to discourage it.

The more people doing ML, the better. It’s not up to us to say what someone should or shouldn’t work on. It’s their own damn decision, and people can decide for themselves whether the work is worth supporting. Personally, I think summarizing a corporation’s knowledge is one of the more interesting unsolved problems, and this seems like a step towards it. Any step towards an interesting objective is de facto good.

Bias has become such an overrated concern. Yes, it matters. No, it’s not the number one most important problem to solve. I say this as someone raising a daughter. The key is to make interesting things while giving some thought ahead of time on how to make it more inclusive. Then pay close attention when you discover that some group of users doesn’t like it, and why. Then think of ways to fix it, and decide whether the cost is low enough.

There is always a cost. Choosing to focus on bias means that you’re not focusing on building new things. It’s a cost I try not to shy away from. But the author seems to feel that it’s the single most important priority, rather than, say, getting a useful summary of 16,000 words. I think I’ll agree to disagree.

  • sillysaurusx 3 days ago

    (Obviously, if your model has massive consequences (e.g. deciding whether to deny coverage, or grant a loan) then you should spend proportionally more time thinking of whether your model is covering all of the blind spots that you naturally have. But the vast majority of models don’t matter that much. And that’s an important philosophy to preserve. Imagine if someone felt that every line of code you ever write needs to be perfect on the first try. Imagine trying to get anything done.)

  • lurking_swe 3 days ago

    If a mozilla employee is just having fun, and clearly has no clue what they’re doing (they are new to this), why not SAY THAT at the beginning of the blog post? Is transparency too much to ask?

    Ruins a company’s credibility…

    • sillysaurusx 3 days ago

      The point is "just having fun" is the way you get to AGI, if it’s possible at all. All the obvious paths are explored, so only those who truly love the work for its own sake will pursue it.

      If you thought that any researcher has a clue what they’re doing, I’m afraid you’ve been mistaken. We have hypotheses and observations from past experiments, but no one has any idea whether something will work until they try it. So by discouraging them from trying, you’re decreasing the likelihood that any useful work will be done at all.

adithyassekhar 3 days ago

Mozilla.ai website feels so sluggish on chrome on android that I thought I was using firefox. Not trying to be snarky, I genuinely had to recheck which browser I was on.

pipeline_peak 3 days ago

Mozilla? Isn’t that the company that made an HTML GUI toolkit back in the day?

woah 3 days ago

Here is the blog post the author is attacking: https://blog.mozilla.ai/uncovering-genai-trends-using-local-...

Is it groundbreaking? No. But the author's overwrought political rant about Mozilla, AI, the internet, and probably capitalism seems unwarranted based on a small blog post. From the "about" page of tante.cc, it seems like they are some kind of tech/political/leftist/"luddite" commentator.

  • sandwitches 3 days ago

    This particular brand of Hackernewsian condescension is unreal. I'd be amazed if it weren't so depressing. Please don't belittle someone's very real grievances like this, thank you.

    • tbrownaw 3 days ago

      I'm sure the parent commenter's grievance is just as real as the article author's grievance.

    • woah 2 days ago

      "Very real grievances"? The grievance being that an intern at Mozilla threw their notes into a couple LLMs and wrote a blog post one day?

    • Grimblewald 2 days ago

      Airing grievances is fine and healthy, but so is allowing a defence.

  • mrintegrity 2 days ago

    It seems clear that the author has an axe to grind; conflating the blog post with "Mozilla corporate policy" kinda gives away the bias.