This article gives off really strong ChatGPT vibes. Bullet points! Confidently stated but quite vague statements! Integration, key innovations, more bullet points! Distinct lack of personal viewpoints or unique opinions! Very long!
Maybe I’m being uncharitable and this is just the way things are written on Medium, but man this was not an easy or particularly enlightening read.
Nah, it's definitely AI generated. I used to like Medium a lot, but ever since AI models became mainstream, it's just been a cesspool of horribly written articles.
So are they generating blog posts from arXiv papers?
I agree. I sometimes have LLMs produce such summaries for my own use—I don’t have time to read in full everything I’m interested in—but I wish they didn’t get posted to HN without being identified as AI-written.
I'm not sure if excessively bolding keywords is a sign of something being written by AI or just something being written by a person way more "hip with the times" than me.
If true, it's got its grammar wrong:
>At its core, Titans merge two powerful mechanisms:
Should be "merges", since "Titans" as used here is a singular proper noun.
The paper the article is based on [1] treats “Titans” as a plural: “Titans are capable of solving problems beyond TC^0, meaning that Titans are theoretically more expressive than Transformers….”
[1] https://arxiv.org/pdf/2501.00663
Binoculars, an LLM detector, gives a "most likely AI-generated".
100%
Header
Paragraph
List items
Delve, here’s why, conclusion.
This is how I write, it’s how I was taught to write. It’s how ChatGPT writes because ChatGPT was trained to write in a clear way. Intro, paragraphs, and a conclusion are just table stakes for writing to persuade. List items are a very common way of communicating complex things. “Delve” is a common word.
By all means assume this was AI drafted (there are enough grammar mistakes that it has clearly been edited by a human with English as a second language), but this list of reasons is a bad one.
And clearly has screenshots from ChatGPT with the same wording as the post itself.
Lazy research, lazy writing, disappointing but not surprising.
FWIW, Phil Wang (lucidrains) has been working on Titans reimplementation since roughly the day the paper was released. It looks to me from the repository that some of the paper's claims have not been reproduced yet, and reading between the lines, it might be Wang considers the paper to not be that groundbreaking after all -- hard to say definitively but the commit speed has definitely slowed down, and the last comments involve failing to replicate some of the key claims.
Unfortunately. The paper looks really good, and I'd like for it to be true.
https://github.com/lucidrains/titans-pytorch
It's a shame when potentially interesting papers don't hold up in practice. I've seen a few cases where the real-world performance didn't match the initial claims.
Yeah, that's the normal outcome for papers like this. Papers which claim to be groundbreaking improvements on Transformers universally aren't. Same story roughly once a month for the past 5 years.
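For anyone trying to follow what those reproduction attempts are actually testing: the paper's central mechanism is a neural long-term memory that keeps learning at test time. Each token's key/value pair is written into a small network by a gradient step on a recall loss, with momentum (the "surprise" signal) and a weight-decay-style forgetting term. Below is a minimal PyTorch-style sketch of that update; the single linear-layer memory, toy dimensions, and fixed learning-rate/momentum/forget constants are simplifications for illustration (the paper makes those gates input-dependent), and this is not the titans-pytorch API.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    d_model = 64  # toy dimension for illustration

    class NeuralMemorySketch(nn.Module):
        """Illustrative sketch of a Titans-style test-time memory module."""
        def __init__(self, d):
            super().__init__()
            self.w_q = nn.Linear(d, d, bias=False)  # query projection (read)
            self.w_k = nn.Linear(d, d, bias=False)  # key projection (write)
            self.w_v = nn.Linear(d, d, bias=False)  # value projection (write)
            # The memory itself: a tiny network whose weights are updated at test time.
            self.memory = nn.Linear(d, d, bias=False)
            # Momentum buffer for the "surprise" signal, same shape as the memory weights.
            self.surprise = torch.zeros_like(self.memory.weight)

        @torch.no_grad()
        def _write(self, grad, lr=0.1, momentum=0.9, forget=0.01):
            # Fixed constants here for brevity; the paper makes these gates input-dependent.
            # Surprise = momentum * past surprise - lr * gradient of the recall loss.
            self.surprise = momentum * self.surprise - lr * grad
            # Forgetting: decay the old memory, then add the new surprise term.
            self.memory.weight.mul_(1.0 - forget).add_(self.surprise)

        def forward(self, x):
            # x: (seq_len, d). Tokens are consumed sequentially and the memory
            # weights change as the sequence streams by (learning at inference).
            outputs = []
            for token in x:
                q, k, v = self.w_q(token), self.w_k(token), self.w_v(token)
                # Read with the query against memory that has only seen past tokens.
                with torch.no_grad():
                    outputs.append(self.memory(q))
                # One gradient step on the recall loss ||M(k) - v||^2 for this token.
                loss = F.mse_loss(self.memory(k.detach()), v.detach())
                grad, = torch.autograd.grad(loss, self.memory.weight)
                self._write(grad)
            return torch.stack(outputs)

    mem = NeuralMemorySketch(d_model)
    y = mem(torch.randn(16, d_model))  # a 16-token toy "sequence"
    print(y.shape)                     # torch.Size([16, 64])

Note the read happens before the write, so each retrieval only reflects earlier tokens, which is what makes the module behave like a recurrent long-term memory rather than a per-token lookup.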
Something like this, using Simon Willison's llm cli, will give you a better experience than such generated articles unless your goal is explicitly to read blogs.
wget https://arxiv.org/pdf/2501.00663 -O - | pdftotext - - | llm -s "Explain the following paper assuming I am familiar with deep learning" -m gpro
Advantages:
* put in any prompt you like
* pick any model (above gpro is alias for latest gemini pro if you have the llm-gemini plugin installed)
* ask for clarifications using llm chat -c (continue last conversation)
* no annoying random image
For another approach for this particular paper, Umar Jamil's videos are always high quality: https://www.youtube.com/watch?v=A6kPQVejN4o
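On the llm pipeline above: the same tool also has a Python API, if you'd rather script the summary plus the follow-up clarifications in one place instead of using llm chat -c. A rough sketch, assuming llm and the llm-gemini plugin are installed; the model ID ("gemini-2.5-pro") and the text-file name are placeholders.

    import llm

    # Placeholder model ID -- run `llm models` to see what your installed plugins provide.
    model = llm.get_model("gemini-2.5-pro")

    # Assume the paper text was already extracted, e.g. via pdftotext as in the command above.
    paper_text = open("titans.txt").read()

    conversation = model.conversation()
    summary = conversation.prompt(
        "Explain the following paper assuming I am familiar with deep learning:\n\n" + paper_text
    )
    print(summary.text())

    # Follow-up questions reuse the same conversation, like `llm chat -c` on the CLI.
    followup = conversation.prompt("How exactly is the long-term memory module updated?")
    print(followup.text())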
Paper https://arxiv.org/abs/2501.00663
For those of you in the field, this seems like a big deal? ELI5
Their model on protein folding was obviously very impactful. Their model for new chemicals has, to my understanding, provided no value. Based on this track record, who knows.
I don't know. Google has some great papers but a lot more hot air. The best thing to do when you see a Google paper is to ignore it until someone you trust more writes a second paper about it.
To your own peril. Ignoring the stuff that comes out of Google has harmed Google itself. Some language is clickbait, some isn't. A Titan, that's a word if there ever was one. The Bitter Truth, a sentence if there ever was one. Language is a thing, and so I have to ask: why is language like this coming from these researchers?
Off topic:
Waymo is a verb, and Google owns it. The concept of AV will be overwhelmingly introduced to the world as Waymo. Not everything has to come out of left field.
I know they do good stuff, it's just the fire hose of bullshit is a lot wider than the tap of innovation.
True of most research as reported by the media https://phdcomics.com/comics/archive.php?comicid=1174
> Healthcare
I wish I could go back to a time when I thought this meant better health outcomes, before I knew that what they actually mean is just reducing cost by decreasing accuracy.
This is seemingly unrelated to the Amazon Titan foundational models available on Bedrock. As if things weren’t confusing enough with OpenAI’s naming scheme…
My favorites are Nvidia Triton (server) and OpenAI Triton (compiler).
> opening doors to innovations in AI-driven reasoning, healthcare, and beyond.
You went with "healthcare"?
If by "You" you mean some LLM, then probably yes.
More businesses participating in health care should drive costs down.
Not if they’re additional middlepersons.
It's Google, so it means healthcare as in the business not the practice, "help me squeeze more money out of people that desperately need this medicine to survive"
[flagged]
Me too. Just hide the articles about AI from HN and move on.
"That's why we're working on an AI to hear about AI for you, so you don't have to!" - someone probably