Which still sounds like user error on the part of the students.
One thing I find it great for is documentation parsing. If I'm working with a library that has dense, obtuse documentation, it's much easier to explain my code issue to the chat and have it do all that parsing for me. I've found it produces acceptable solutions, and when it makes mistakes I just point them out and it corrects itself. Nothing I wouldn't expect from a human assistant.
Why do we need something to be infallible before we declare it intelligent?