There’s no way for teachers to figure out if students are using ChatGPT to cheat, OpenAI says in new back-to-school guide::AI detectors used by educators to detect use of ChatGPT don’t work, says OpenAI.
Calling it cheating is the wrong way to think about it. If you had a TI 80 whatever in the early 90s, it was practically cheating when everyone else had crap for graphing calculators.
ChatGPT used effectively isn’t any different than a calculator or an electronic typewriter. It’s a tool. Use it well and you’ll do much better work.
These hand wringing articles tell us more about the paucity of our approach to teaching and learning than they do about technology.
Do you understand what definitions are in place for authorship, citation, and plagiarism in regards to academic honesty policies?
The policies, and more importantly the pedagogy, are out of date and basically irrelevant in an age where machines can and do create better work than the majority of university students. Teachers used to ban certain levels of calculator from their classrooms because they were considered ‘cheating’ (they still might). Those teachers represent a backwards approach to preparing students for a changing world.
The future isn’t writing essays independent of machine assistance, just like the future of calculus isn’t slide rules.
I think a big challenge or gap here is that writing correlates with vocabulary and with developing the ability to articulate. It pays off not just in the prose you write, but in your ability to speak, discuss, and present ideas. I agree that AI is a tool we will likely be using more in the future. But education is in place to develop skills and knowledge. Does AI help or hinder that goal if a teacher’s job includes evaluating how much a student has learned and whether they can articulate it?
I don’t fully agree with OP, but I think we could probably do with adjusting some of those policies. Personally, I think with current AI, if somebody composes something by making multiple AI prompts and selecting the best result, they should get some kind of authorship, because they used a tool to create something.
Meh. You’ll do better if you actually know some math as well. No engineer is going to pull up the calculator to calculate 127+9. I hang around math wizards all day, and it’s me who needs to use the calculator, not them. I’ll tell you that much.
Same goes for writing. Sure, ChatGPT can do amazing things. But if you can’t do them yourself, you’ll struggle to spot the not-so-amazing things it does.
It’s always easy when you know basic math, writing and reading to say schools are doing it all wrong. But you’re already mostly fluent in what they’re teaching. With that knowledge, you can use ChatGPT as a great tool. Without that knowledge, you couldn’t.