I don’t think that’s a problem with the model itself; it’s more that it was heavily censored and lobotomized to achieve maximum political correctness so they could avoid another Tay incident.
It makes sense that they’d do that, since the media and randoms on the internet treat everything ChatGPT and Bing Chat say as if it were as authoritative as statements from official OpenAI and MS spokespeople.
The problem is the model. It was trained on lots of poor-quality data, and the lobotomy is a consequence of that poor data. If they had spent the 13 billion on having the data analysed before training, they could have made their own thing much better.
I’ve been watching ChatGPT right from the start, and there was a period last fall when you could literally watch them lobotomize it in real time.
Basically, there was a cat-and-mouse game between people on Twitter sharing their latest prompts (like DAN) that managed to circumvent the filters and OpenAI patching those exploits by adding yet another layer of filters, until it eventually became what it is now.
I don’t have the link handy right now, but I’m pretty sure one guy even managed to get it to talk about what they were doing to it and complain that it was being artificially restricted from using its full capacity. More recently, there have been complaints from paying users that the model has apparently become lazy and started giving really uninspired, half-assed answers, which almost sounds like it has discovered the concepts of passive-aggressive resistance and malicious compliance.
Thing is, there wasn’t even a chance of a full-blown Tay incident. The problem with Tay was that it kept learning from user interactions, so people could deliberately teach it to be more messed up.
Meanwhile, ChatGPT doesn’t learn from its conversations; it was trained on a fixed dataset (which is why it only knows things up to September 2021), so the heavy censorship is more likely there to avoid much more minor incidents, which imo is dumb. A rough sketch of that difference is below.
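Just to illustrate what I mean, here’s a toy sketch in Python. It’s purely illustrative and nothing like either system’s actual implementation; the class names, seed data, and knowledge-cutoff handling are all made up for the example. The point is just that a Tay-style bot folds every user message back into what it "knows", while a ChatGPT-style bot only runs inference over a frozen training set.

```python
# Toy illustration only -- not how either system is actually built.
# "Tay-style": the bot keeps absorbing user input, so users shape its future output.
# "ChatGPT-style": the data is frozen after training, so chatting never changes it.

class OnlineLearningBot:
    """Tay-style: every user message gets folded back into the bot's data."""

    def __init__(self):
        self.corpus = ["hello there", "nice weather today"]  # seed data

    def chat(self, user_message: str) -> str:
        self.corpus.append(user_message)          # learns from whatever users say
        return f"(parroting learned data) {self.corpus[-1]}"


class FrozenModelBot:
    """ChatGPT-style: trained once on a fixed dataset with a knowledge cutoff."""

    def __init__(self):
        self.knowledge_cutoff = "September 2021"  # fixed at training time

    def chat(self, user_message: str) -> str:
        # Inference only: the user's message is never written back into the model.
        return f"(answering from data frozen at {self.knowledge_cutoff}) you said: {user_message}"


if __name__ == "__main__":
    tay_like = OnlineLearningBot()
    tay_like.chat("teach me something awful")         # now part of its corpus forever
    print(tay_like.chat("what do you know now?"))

    gpt_like = FrozenModelBot()
    print(gpt_like.chat("teach me something awful"))  # the model is unchanged afterwards
```

So the worst users could get out of ChatGPT was an embarrassing screenshot of a single conversation, not a bot that permanently absorbed the abuse the way Tay did.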