I often see people complaining about AIs doing weird things when using them to help develop software. Sometimes those AIs do amazing things, but as our projects and software get more complex they start to get things wrong far more often.
Some of this is survivorship bias: we forget the times early on when the AI did weird things, but remember the times it didn't. Some of it is because our mental model treats large language models as being like people, and for now that's the wrong way to look at them.
The memory problem
When we first get together with people to talk about new ideas we get a torrent of thoughts. Some are great, some less so, but over time we reach a shared understanding. The problem with AIs is that they don't learn from these discussions the way people do. Seconds after helping us with something, they've forgotten it. The longer we continue, the more frustrating this can get.
The process of building is one of constantly making decisions. Some decisions are small and some are large, but each one moves us closer to what we want. Sometimes we change our minds and make new decisions. Unfortunately, our AIs don't remember this, or, if they're working from what we told them in the past, they're unaware that we've since decided to do something differently.
The solution: explicit context
AIs can't read minds. They may know 100 ways to help you, so you need to remind them of what you're trying to do, how you want it done, and what decisions have already been made about how to proceed.
If you start each new interaction by telling them all of those things, they'll narrow in on what you want, and your AI assistant will continue to amaze you.
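To make this concrete, here's a minimal sketch of what "telling them all of those things" can look like in practice. It assumes the OpenAI Python client; the project description, decisions, model name, and helper function are all illustrative placeholders, not part of Metaphor.

```python
# A minimal sketch: send the same explicit context at the start of every
# new conversation so the model begins with the current goals and decisions.
from openai import OpenAI  # assumes the OpenAI Python client is installed

# Hypothetical project context: what we're doing, how we want it done,
# and the decisions made so far (including ones we've since changed).
PROJECT_CONTEXT = """\
Goal: add CSV export to the reporting service.
Conventions: TypeScript, 4-space indentation, no default exports.
Decisions so far:
- Stream rows to the output; we reversed the earlier in-memory approach.
- Dates are ISO 8601 in UTC.
"""

def ask(question: str) -> str:
    """Start a fresh interaction with the full project context up front."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": PROJECT_CONTEXT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Write the CSV export function, following the decisions above."))
```

The key point isn't the particular API: it's that every new interaction starts from the same explicit statement of goals, conventions, and decisions, rather than hoping the AI remembers them.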

And yes, we use this approach for everything we do. It's why we created the Metaphor prompt creation language. It's not the only way to get great results, but it's one we and our partners have seen work time and again.
Find out more about Metaphor.