These comments may seem silly at a glance, like joking about using WebMD instead of going to the doctor. But if this is the kind of unhealthy over-reliance on AI that OpenAI considers “cool,” we should all be worried.
Doesn’t it just produce a garbled mess that doesn’t even function, every time you try? Every time I’ve tried to generate code with it, it has never worked, and fixing the errors is more time-consuming than just writing the code myself.
Nope. It works fine most of the time. You need to be clear about what you’re asking for and not ask for too much at once. You tell it: write a script that does one small part, test it, then add the next part, and so on (roughly like the sketch at the end of this comment).
I mostly do this while doing another task, using the free time during compiles and server deploys to get ahead on my next thing.
You can’t just ask it to do a full feature and hope it works properly; that’s a recipe for disaster.
You have to think of it like working with a junior dev.
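For anyone curious, the step-by-step loop looks roughly like this. It’s only a sketch: it assumes the official openai Python package, and the model name and task list are placeholders I made up.

```python
# Sketch of the "small part, test, then add the next part" workflow described above.
# Assumes the official `openai` Python package (v1+); the model name and tasks are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

steps = [
    "Write a Python function that reads a CSV file and returns a list of rows.",
    "Keep the previous code and add a function that filters rows by a column value.",
    "Keep the previous code and add unittest tests for both functions.",
]

code_so_far = ""
for step in steps:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you have access to
        messages=[
            {"role": "system", "content": "Return only runnable Python code, no prose."},
            {"role": "user", "content": f"Current code:\n{code_so_far}\n\nNext task: {step}"},
        ],
    )
    code_so_far = response.choices[0].message.content
    print(code_so_far)  # review and actually run/test each step before moving on
```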
Fuck ChatGPT. If you have the hardware for it, you can run AI models like Mistral or Qwen locally on your own device; they’re open source, and Mistral is under the Apache license (see the sketch at the end of this comment).
Most resources on this are on Reddit, in r/LocalLLaMA.
It’s wonderful. I can iterate on menial bullshit in code like manifest lists completely offline without data leaks.
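In case it helps, here’s a rough sketch of that offline setup using the ollama Python client. It assumes the Ollama daemon is running locally and the mistral model has already been pulled; adapt it to whichever runner you actually use.

```python
# Minimal sketch: chat with a locally hosted Mistral model, nothing leaves the machine.
# Assumes the `ollama` Python package and a local Ollama server with the `mistral` model pulled.
import ollama

response = ollama.chat(
    model="mistral",
    messages=[
        {"role": "user", "content": "Draft a manifest list entry for a multi-arch container image."},
    ],
)
# Access pattern follows the library's README example.
print(response["message"]["content"])
```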
It’s better if you have it working iteratively. The newer command-line tools let it compile the code and fix the errors itself, and there are some techniques you can use so it does a better job of keeping track of what it’s doing. Generally they still suck at large existing codebases, but they’re pretty good if you’re doing something greenfield like a prototype, or using the existing project as a dependency rather than trying to modify it.
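The core loop those tools run is simple enough to sketch by hand. In the sketch below, ask_model is a hypothetical stand-in for whatever LLM client you use, and gcc is assumed to be on the PATH.

```python
# Sketch of a generate -> compile -> feed-the-errors-back loop, as described above.
# `ask_model` is a hypothetical placeholder for your actual LLM client.
import subprocess

def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client of choice here")

source = ask_model("Write a small C program that prints the first 10 Fibonacci numbers.")

for attempt in range(5):  # give up after a few rounds rather than looping forever
    with open("main.c", "w") as f:
        f.write(source)
    build = subprocess.run(["gcc", "main.c", "-o", "main"], capture_output=True, text=True)
    if build.returncode == 0:
        break  # it compiles; still review and run it yourself before trusting it
    # hand the compiler output back and ask for a corrected version
    source = ask_model(
        f"This C code failed to compile:\n{source}\n\n"
        f"Compiler output:\n{build.stderr}\n"
        "Return the corrected full source file only."
    )
```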
I’ll have to have a look into that. If nothing else, it has been good at collecting my thoughts and giving me a clear idea of how not to do things.
It has some good suggestions at times, too. Don’t just discount it “because”.
You need to use your critical thinking skills with this as much as anywhere else.
So I’ve tried it, don’t think it is worthwhile, and don’t think it is ultimately a good use of humanity’s resources, and you’re telling me that I’m not “critically thinking” because I’m not a hardcore true believer?
Jesus, what the fuck happened to Lemmygrad since I left? I knew it was pushing in the direction of AI stuff, but I didn’t think it would get so full of smug techbro bullshit so soon.
The thing I’ve found that works best is telling it to keep a scratchpad of what it’s doing, what it’s learned, and a few other things. You have to tell it explicitly that it has to keep the scratchpad updated, otherwise it’ll neglect it and it’ll be a disaster when you go to recover from it forgetting everything.
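Roughly like this, as a sketch: the scratchpad file name and the ask_model helper are made up, but the point is that the scratchpad goes back into every prompt and the model is explicitly told to rewrite it each turn.

```python
# Sketch of the scratchpad idea: the model re-reads and updates a notes file on every step,
# so work can be recovered after it loses context or a session ends.
# `ask_model` is a hypothetical placeholder; SCRATCHPAD.md is an arbitrary file name.
from pathlib import Path

SCRATCHPAD = Path("SCRATCHPAD.md")

def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client of choice here")

def run_step(task: str) -> str:
    notes = SCRATCHPAD.read_text() if SCRATCHPAD.exists() else "(empty)"
    answer = ask_model(
        "You keep a scratchpad of what you are doing, what you have learned, and what remains.\n"
        f"Current scratchpad:\n{notes}\n\n"
        f"Task: {task}\n\n"
        "After your answer, output the full updated scratchpad between <scratchpad> and </scratchpad> tags."
    )
    # Persist the updated scratchpad so the next step (or a fresh session) can pick up from it.
    if "<scratchpad>" in answer and "</scratchpad>" in answer:
        new_notes = answer.split("<scratchpad>", 1)[1].split("</scratchpad>", 1)[0].strip()
        SCRATCHPAD.write_text(new_notes)
    return answer
```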
You can ask it to make a memory dump in text form also.
ChatGPT also has a memory feature… it just fills up easily!