Quote Originally Posted by Tanarii View Post
Absolutely. It's being roundly rejected by IT and Educational professionals due to the errors it comes up with from not understanding the meaning of the words it is stringing together, which cause hilarious errors. Hilarious ... as long as they're not being used for anything serious. Students have already been expelled for trying to use it to write papers.
Which still sounds like user error on the part of the students.

One thing I find it great for is documentation parsing. If I'm working with a code library that has dense, opaque documentation, it's much easier to just explain my code issue to the chatbot and have it do all that parsing for me. I've found it produces acceptable solutions, and when it makes mistakes I just point them out and it corrects itself. Nothing I wouldn't expect from a human assistant.
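For what it's worth, the workflow amounts to bundling the relevant documentation excerpt, the failing code, and the error message into one prompt. Here's a minimal sketch of what that might look like; the `build_doc_query` function and the message format are hypothetical illustrations, not any particular chatbot's API:

```python
# Sketch of the "let the chatbot parse the docs" workflow: combine a
# documentation excerpt, the failing code, and the error into one chat
# prompt instead of reading the full docs yourself.
# build_doc_query and the example signature are made up for illustration.

def build_doc_query(doc_excerpt: str, code: str, error: str) -> list[dict]:
    """Assemble a chat-style message list asking the model to do the
    documentation parsing and diagnose the failure."""
    system = (
        "You are a coding assistant. Use only the documentation excerpt "
        "provided to diagnose the user's error."
    )
    user = (
        f"Documentation excerpt:\n{doc_excerpt}\n\n"
        f"My code:\n{code}\n\n"
        f"The error I get:\n{error}\n\n"
        "Explain what I'm doing wrong and suggest a fix."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# Example: a hypothetical library function with a type restriction.
messages = build_doc_query(
    doc_excerpt="connect(timeout: seconds, retries: int) -> Session",
    code="session = connect(retries=3.5)",
    error="TypeError: retries must be an int",
)
```

The resulting `messages` list could then be sent to whatever chat endpoint you use; the point is that the model, not you, does the cross-referencing between the docs and the error.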

Why do we need something to be infallible before we declare it intelligent?