I love this essay, and it very closely mirrors my own feelings about the topic. Thanks for writing/sharing it.
I’m perpetually bamboozled by my fellow software engineering colleagues who insist on proudly shouting from the rooftops “Look at me, ma! I’m vibe coding!” as if it’s some badge of honor to see who can churn out the greatest quantities of shitcode the fastest and completely surrender any last scraps of their cognitive abilities to the best LLM provider of the current moment.
If it makes you feel better, I'm still typing things out by hand that just work, using Notepad and FTP.
I suspect this is a bit tongue-in-cheek about the nature of tools improving over time.
The difference is that I absolutely could write code by hand in Notepad and upload it to a server via FTP if I had to.
I think it is a safe bet that people who learn to code with AI agents are not going to have the skills to code without them.
> I suspect this is a bit tongue-in-cheek
no
Absolutely hilarious! I bet you forgo using calculators and do all of your arithmetic by hand, too. Such a clever analogy!
Funny you mention that: I just wrote a bunch of arithmetic functions in JS because floats can't be trusted. It was a tad more work than I expected, but that's fine. Even I thought it was a tad silly, but when I had to check whether the results were correct and compared them against some big-number calculators from the top of a Google search, they didn't match.
What do you think? 2/3 with 10 digits of accuracy after the decimal point: should that be 0.6666666666 or 0.6666666667? If I later add 1/3 to it, my result is 1, which seems more correct?
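To make the tradeoff concrete, here's a rough TypeScript-flavoured sketch, assuming BigInt-based fixed-point arithmetic with 10 kept digits; divTrunc, divRound, and toDecimal are illustrative stand-ins, not the functions I actually wrote:

    // Native floats: the classic surprise that started all of this.
    console.log(0.1 + 0.2 === 0.3); // false (the sum is 0.30000000000000004)

    const DIGITS = 10n;
    const SCALE = 10n ** DIGITS; // 10,000,000,000

    // Divide two non-negative integers, keeping DIGITS decimal places, truncated.
    function divTrunc(a: bigint, b: bigint): bigint {
      return (a * SCALE) / b;
    }

    // Same division, but rounded half-up on the last kept digit.
    function divRound(a: bigint, b: bigint): bigint {
      return (a * SCALE * 2n + b) / (b * 2n);
    }

    // Render a scaled value as a decimal string.
    function toDecimal(x: bigint): string {
      const whole = x / SCALE;
      const frac = (x % SCALE).toString().padStart(Number(DIGITS), "0");
      return `${whole}.${frac}`;
    }

    const twoThirdsTrunc = divTrunc(2n, 3n); // 0.6666666666
    const twoThirdsRound = divRound(2n, 3n); // 0.6666666667
    const oneThird = divRound(1n, 3n);       // 0.3333333333

    console.log(toDecimal(twoThirdsTrunc + oneThird)); // 0.9999999999
    console.log(toDecimal(twoThirdsRound + oneThird)); // 1.0000000000

Rounding the last kept digit is what lets 2/3 + 1/3 land back on exactly 1; truncation leaves the sum one unit short in the last place.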
> In one of his podcasts, Ezra Klein said that he thinks the “message” of generative AI (in the McLuhan sense) is this: “You are derivative.” In other words: all your creativity, all your “craft,” all of that intense emotional spark inside of you that drives you to dance, to sing, to paint, to write, or to code, can be replicated by the robot equivalent of 1,000 monkeys typing at 1,000 typewriters. Even if it’s true, it’s a pretty dim view of humanity and a miserable message to keep pounding into your brain during 8 hours of daily software development.
I think this is a fantastic point well summarised. I see people coming out of the woodwork here on HN, especially when copyright is discussed in relation to LLMs, to say that there's no difference between human creativity and what LLMs do. (And therefore of course training LLMs on everything is fair use.) I'm not here to argue against that point of view, just to illustrate what this "message" means.
I feel fairly similar to Nolan and to this day haven't really started using LLMs in a major way in my work.
I do occasionally use one where I might previously have gone to Stack Overflow. Today I asked it a mildly tricky TypeScript generic-wrangling question that ended up using the Extract helper type.
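The real question was project-specific, but it was roughly this flavour of thing, where Extract pulls one member out of a discriminated union; the AppEvent union and handler map below are made up purely for illustration:

    // Extract<T, U> keeps only the union members of T assignable to U,
    // so EventOf<"click"> resolves to exactly the click variant.
    type AppEvent =
      | { type: "click"; x: number; y: number }
      | { type: "keydown"; key: string }
      | { type: "scroll"; delta: number };

    type EventOf<T extends AppEvent["type"]> = Extract<AppEvent, { type: T }>;

    // Each handler receives precisely its own event shape, with no casts.
    type Handlers = {
      [T in AppEvent["type"]]: (event: EventOf<T>) => void;
    };

    const handlers: Handlers = {
      click: (e) => console.log(e.x, e.y),
      keydown: (e) => console.log(e.key),
      scroll: (e) => console.log(e.delta),
    };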
However, I'm also feeling the joy of coding isn't quite what it used to be as I move along in my career. I really feel great about finding the right architecture for a problem, or optimising something that used to be a roadblock for users until it's hardly noticeable. But so much work can just be making another form, another database table, etc. And I am always teetering back and forth between "just write the easy code (or get an AI to generate it!)" and "you haven't found the right architecture that makes this trivial".
Exactly this kind of blog post feels human. I do think we are going to miss this sooner than we think.
> all your creativity, all your “craft,” all of that intense emotional spark inside of you that drives you to dance, to sing, to paint, to write, or to code
Thanks for this, brother. This translated through the internets and punched me right in the feels.
In some ways I feel sad for all the linguistics and NLP people, though the Bayesians had their day too.
However, after 20 years in RL, I'm glad that RL (MDP solving) is finally coming into view as the primary generalist intelligence framework.
People made fun of us RL people because of the data/compute overhead, but now that every robotic controller and system is becoming RL-tuned, it's just going to chew its way through the whole computing stack eventually.
I see so many people in AI burning out because they see all these narrow methods failing to generalize.
Exciting times ahead.