• 0 Posts
  • 33 Comments
Joined 2 months ago
Cake day: June 28th, 2025

  • I’ve used AI by just pasting code into a chatbot and asking if there’s anything wrong with it. It would find real problems, but it would also flag things as wrong that were actually fine.

    I’ve also used it in an agentic tool (Cursor), and it’s not good at debugging even slightly complex code. It would often get “stuck” on errors that were obvious to me, making wrong, sometimes nonsensical changes instead.

  • I keep hearing stuff like this, but I haven’t found a good use or workflow for AI (other than occasional chatbot sessions). Regular autocomplete is more accurate (no hallucinations) and faster than AI suggestions, especially once you account for having to constantly review the suggestions for correctness. I guess stuff like Cursor is OK at making one-off tools on very small code bases, but it hits a brick wall when the code base gets too big. Then you’re left with a bunch of unmaintainable code you’re not very familiar with and would have to spend a lot of time fixing yourself. Dunno if I’m doing something wrong or what.

    I guess what I’m saying is that AI can speed you up to a point, while the project accumulates massive amounts of technical debt; once you take into account all the refactoring and debugging time, it ends up taking longer and producing a buggier project. At least, that’s been my experience.

  • I’ve tried Copilot for a while and played around with Cursor for a bit. I was better and faster without Copilot, because I sometimes didn’t pay enough attention to the lines it would generate, which caused subtle bugs that took a long time to track down (see the made-up example below). Cursor just produced unmaintainable code bases that I had no knowledge of; to make major changes, it would be faster for me to rewrite them from scratch. The act of typing gives me time to think about what I’m doing or about to do, while Copilot’s generations are distracting and break my train of thought. I work best with good LSP tooling and occasionally an AI chatbot that doesn’t directly modify my code (mostly for customized example snippets for libraries or frameworks I’m unfamiliar with, though that has its own problems because the LLM’s knowledge is often out of date).
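
    To give a flavor of what I mean by “subtle”: here’s a hypothetical Python sketch (not code from my projects) of the kind of completion that looks right, runs fine on typical input, and only breaks on an edge case you won’t notice in review.

        # Hypothetical AI completion: plausible-looking, works on normal input.
        def last_n_lines(text: str, n: int) -> list[str]:
            lines = text.splitlines()
            # Bug: when n > len(lines), the start index goes negative and
            # Python treats it as counting from the end, silently dropping
            # lines instead of returning all of them. A robust version would
            # be `lines[max(len(lines) - n, 0):]`.
            return lines[len(lines) - n:]

        # last_n_lines("a\nb\nc", 5) returns ["b", "c"], not ["a", "b", "c"]

    Nothing crashes, the types check out, and a quick skim in a diff looks fine, which is exactly why these took so long to debug.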

  • People have different levels of “nerves,” and it kind of sounds like you may be filtering out applicants on an arbitrary metric (how nervous a person is in an interview). I don’t have enough information about your process to say for sure (obviously), but it may be something to think about. Interviews can be very high-stakes for some people (“I may become homeless”) and not for others (“my parents are rich”). Once hired, the work isn’t necessarily as high-stakes, and toy problems aren’t what SEs work on day-to-day.