Humans are really bad at determining whether a chat is with a human or a bot
At 22%, ELIZA is hardly indistinguishable from a human.
Passing the Turing test stood largely out of reach for 70 years precisely because humans are pretty good at spotting counterfeit humans.
This is a monumental achievement.
As long as no one messes with their open source contributions… (ditto for MS)
To the one person who upvoted this: We should be friends.
Aye, I’d wager Claude would be closer to 58-60%. And with the model-probing research Anthropic’s been publishing, we could get to ~63% on average in the next couple of years? Those last few percent will be difficult for an indeterminate amount of time, I imagine. But who knows. We’ve already blown past a ton of “limitations” that I thought I might not live long enough to see.
Participants only said other humans were human 67% of the time.
On the other hand, the human participant scored 67 percent, while GPT-3.5 scored 50 percent, and ELIZA, which was pre-programmed with responses and didn’t have an LLM to power it, was judged to be human just 22 percent of the time.
The current gap is 54% to 67%, not 54% to 100%.
Sounds like he needs someone with training to help him rework his behavioral and thought patterns, something a functional social-support system would provide, if those were as common as comment culture.
Thank you, I seldom see my own thoughts laid out so clearly. As a practitioner of the Dark Arts (marketing), this union of commerce and art is a foul bargain. I think it’s time the two had some time apart to work on themselves.
You’ve reminded me that I’ve been meaning to look more seriously at an Ultracortex.
It seems to me that we’ve reached a crossroads. I’ve been very aware of the data mining, garden walls, data trading, privacy violations, security issues, ownership issues, etc. - for roughly 30 years. I regularly make the choice to be exploited for the benefits I extract, largely because I don’t highly value the data they’ve gotten from me thus far. But the necessity of developing strategies to keep the devil’s bargain beneficial has reached a fever pitch. I want to train my own AI and public AIs. I want to explore the vast higher-dimensional semantic spaces of generative models without API charges. APIs are vanishing as we speak, anyway, with companies fearful of their data being extracted without compensation. Can’t really sit on the Open/Closed fence anymore.
As a Taoist, I’m mildly offended.
I literally read it to mean they’re starting to run out of women and children to kill.
Until they can distribute the training load of large models across consumer graphics cards (and do something like SETI@Home), it does seem like the benefit of distributed training isn’t enough to overcome the friction.
Like a decade ago?
The papers have a ton of practical info about feasibility, implementation, etc.
I do think Perplexity does a better job. Since it cites sources in its generated responses, you can easily check its answers. As for the general public trusting Google, the company’s fall from grace began in 2017, when the EU fined it €2.4 billion for rigging search results. There’s been a steady stream of controversies since then, including the revelation that Chrome continues to track you in private mode. YouTube’s predatory practices are relatively well known. I guess I’m saying that if this is what finally makes people give up on them, it’s no skin off my back. But I’m disappointed by how much their mismanagement seems to be adding to the pile of negativity surrounding AI.
WebP is a raster graphics file format developed by Google intended as a replacement for JPEG, PNG, and GIF file formats. It supports both lossy and lossless compression, as well as animation and alpha transparency. Google announced the WebP format in September 2010, and released the first stable version of its supporting library in April 2018.
The format has spotty support across applications, and a serious libwebp vulnerability (CVE-2023-4863) forced widespread patching last year. It’s not clear why you’d need to do anything with it.