

A couple of things:
- this post is about chatbots talking to people, and how you can steer the simulated conversation toward whatever you want
- it did not debug anything; a human debugged something and wrote about it. Then that human’s write-up and a ton of others were mapped into a huge probability map, and some computer simulated what people talking about this would most likely say (a toy sketch of that idea follows this list). Is it useful? Sure, maybe. Why didn’t you debug it yourself?
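
Here is a minimal sketch of that “probability map” point, using a toy bigram table instead of a real model. The corpus and function names are made up for illustration; actual LLMs learn distributions over tokens with neural networks rather than literal lookup tables, but the generation loop has the same shape: look up what tends to follow, sample, repeat.

```python
import random
from collections import Counter, defaultdict

# Toy "training data": stand-in for the human write-ups the model ingested.
corpus = "the bug was in the parser and the fix was in the lexer".split()

# "Training": map each word to a frequency table of the words that follow it.
next_word = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    next_word[prev][cur] += 1

def generate(seed: str, length: int = 8) -> str:
    """Simulate the most likely continuation, one word at a time."""
    words = [seed]
    for _ in range(length):
        candidates = next_word.get(words[-1])
        if not candidates:
            break  # nothing ever followed this word in the corpus
        # Sample proportionally to how often each continuation was seen.
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```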
Ok, that’s an easy one. If I make an LRM of your dead grandma, is that your grandma? Why not? What’s different?
Your bug-fixing teams are begging for automation. That tells me you have an unsustainable setup. You are providing a bug-fix suggestion tool; I don’t see how that fixes your problem. Seems like you need better coding practices and possibly more people.