• 10 Posts
  • 33 Comments
Joined 1 year ago
Cake day: June 14th, 2023



  • Online bots, often called chatbots, are an increasingly big problem today. They are automated accounts used to generate text. Some of the earliest examples were bots spreading spam and malware on social media platforms, but in recent years bots have become far more sophisticated and are now being used to manipulate public opinion and interfere with elections.

    Bots have been used to manipulate online discussion since long before modern large language models existed. One of the first major examples of large-scale, bot-driven manipulation was the 2016 US presidential election: researchers found that Russian bots played a significant role in spreading misinformation and propaganda on social media during the campaign. Bots were also used to amplify support for the Republican candidate, Donald Trump, and to suppress support for his opponents.

    Since the 2016 election, bots have been used to manipulate online discussion in a variety of other contexts. For example, they have been used to spread misinformation about the COVID-19 pandemic, to promote political extremism, and to sow discord in online communities.

    It is difficult to say exactly how long bots have been used to manipulate online discussion, but it is clear that the problem has become increasingly widespread and sophisticated in recent years.

    The use of bots to manipulate online discussion is a serious threat to democracy and to the free flow of information. It is important to be aware of the problem and to be critical of the information you see online.






  • There’s a difference between a sapient creature drawing inspiration and a glorified autocomplete using copyrighted text to produce sentences which are only cogent due to substantial reliance upon those copyrighted texts.

    But the AI is looking at thousands, if not millions, of books, articles, comments, etc. That’s what humans do as well - they draw inspiration from a variety of sources. So is sentience the distinguishing criterion for copyright? Only a being capable of original thought can create original work, and therefore anything not capable of original thought cannot create copyrighted work?

    Also, it’s irrelevant here, but calling LLMs “glorified autocomplete” is like calling a jet engine a “glorified horse”: technically true, but you’re trivialising it.
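    To make the “autocomplete” comparison concrete, here is a minimal sketch of the next-token loop at the core of LLM text generation, assuming Python with the Hugging Face transformers and torch libraries; the gpt2 model name and the prompt are purely illustrative choices, not anything referenced above.

```python
# A minimal sketch of greedy "autocomplete"-style decoding with a causal LM.
# Model and prompt are illustrative assumptions, not anyone's production setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Online bots are"
ids = tokenizer(prompt, return_tensors="pt").input_ids

# Greedy decoding: at each step, pick the single most probable next token,
# append it, and feed the longer sequence back into the model.
for _ in range(20):
    with torch.no_grad():
        logits = model(ids).logits           # shape: (1, seq_len, vocab_size)
    next_id = logits[0, -1].argmax()         # most likely next token id
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0], skip_special_tokens=True))
```

    The scale of the model and of its training data is what separates this loop from a phone keyboard’s suggestions, which is roughly the point of the jet-engine analogy above.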