Pitfalls of premature closure with LLM-assisted coding (shayon.dev)
31 points by shayonj 3 days ago | 7 comments
jbellis 43 minutes ago [-]
I think this is correct, and I also think it holds for reviewing human-authored code: it's hard to do the job well without first having your own idea in your head of what the correct solution looks like [even if that idea is itself flawed].
danielbln 4 hours ago [-]
I put the examples he gave into Claude 4 (Sonnet), asking only to evaluate the code, and it pointed out every single issue in the snippets (N+1 query, race condition, memory leak). The article doesn't mention which model was used, how exactly it was used, or in which environment/IDE.

The rest of the advice in there is sound, but without more specifics I don't know how actionable the section "The spectrum of AI-appropriate tasks" really is.
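For readers unfamiliar with the first bug class named above, here is a minimal sketch of the N+1 query pattern and its fix. The schema and data are hypothetical, not from the article; sqlite3 stands in for whatever database the article's snippets used.

```python
import sqlite3

# Hypothetical schema, purely to illustrate the N+1 pattern.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'ada'), (2, 'grace');
    INSERT INTO posts VALUES (1, 1, 'a'), (2, 1, 'b'), (3, 2, 'c');
""")

def titles_n_plus_1():
    # N+1: one query for the author list, then one more query per author.
    result = {}
    for author_id, name in conn.execute("SELECT id, name FROM authors"):
        result[name] = [t for (t,) in conn.execute(
            "SELECT title FROM posts WHERE author_id = ?", (author_id,))]
    return result

def titles_joined():
    # Fix: a single JOIN fetches everything in one round trip.
    result = {}
    rows = conn.execute("""
        SELECT a.name, p.title
        FROM authors a JOIN posts p ON p.author_id = a.id
        ORDER BY p.id
    """)
    for name, title in rows:
        result.setdefault(name, []).append(title)
    return result
```

Both functions return the same mapping; the difference only shows up in query count, which is exactly the kind of defect that is easy to wave through when reviewing plausible-looking generated code.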

metalrain 23 minutes ago [-]
It's not about "model quality". Most models can improve their code output when asked, but the problem is the lack of introspection by the user.

Basically the same problem as copy-paste coding, but the LLM can (sometimes) use your exact variable names and types, so it's easier to forget that you still need to understand and check the code.

shayonj 3 hours ago [-]
My experience hasn't changed between models, given the core issue mentioned in the article. Primarily I have used Gemini and Claude 3.x and 4. Some GPT 4.1 here and there.

All via Cursor, some internal tools, and Tines Workbench.

suddenlybananas 4 hours ago [-]
I initially thought the layout of the sections was an odd and terrible poem.
tempodox 2 minutes ago [-]
Now that you mention it, me too.
shayonj 3 hours ago [-]
haha! I didn't see it that way originally. Shall take it as a compliment and rework that ToC UI a bit :D.