For years, the conventional wisdom for wide-eyed youngsters about to enter the job market was "learn to code." Now, it seems like some of the programmers themselves could use the same advice.

That's according to Namanyay Goel, an experienced developer who's not too impressed by the new generation of keyboard-clackers' dependence on newfangled AI models.

"Every junior dev I talk to has Copilot or Claude or GPT running 24/7. They're shipping code faster than ever," Goel wrote in a recent blog post, titled — fittingly — "New Junior Developers Can't Actually Code."

"Sure, the code works, but ask why it works that way instead of another way? Crickets," he wrote. "Ask about edge cases? Blank stares."

"The foundational knowledge that used to come from struggling through problems is just… missing," he added.

No doubt chalkboard-pounding algebra teachers once grumbled about calculators, and no one would question their place now. But Goel's gripe isn't necessarily with AI itself — it's that the technology makes for too tempting a crutch.

As with any vocation, part of mastering programming involves struggling through it first, and having the courage to ask the old masters questions. In Goel's heyday, the place to do just that was StackOverflow. The forum's still popular, but in the post-ChatGPT age, more and more coders are turning to large language models for answers instead.

"Junior devs these days have it easy. They just go to chat.com and copy-paste whatever errors they see," Goel wrote.

But if AI just gives the right answer, it isn't forcing newcomers to synthesize different possibilities and really grapple with thinking through the problem.

"With StackOverflow, you had to read multiple expert discussions to get the full picture," opined Goel. "It was slower, but you came out understanding not just what worked, but why it worked." 

It's sound logic. And some research may back up the sentiment. A recent study conducted by researchers at Microsoft and Carnegie Mellon suggested that the more people used AI — and as they placed increased trust in its answers — the more their critical thinking skills atrophied, like a muscle that doesn't get much use. 

There are some caveats to that study — chiefly that it hinges on participants' self-reported perceptions of their own effort as an indicator of critical thinking — but the idea of cognitive offloading isn't a huge stretch.

Plus, there's the fact that the programming ability of many of these AI models can be pretty dubious at times, as they're all prone to hallucinating. And while they may speed up your workflow, the tradeoff, as some evidence shows, is that the tech ends up inserting far more errors into your code.

Not that we can put the genie back in the bottle. Goel argues that the "future isn't about where we use AI — it's about how we use it." But right now, "we're trading deep understanding for quick fixes," he says. "We're going to pay for this later."

