Three raccoons in a trench coat. I talk politics and furries.

https://www.youtube.com/@ragdoll_x

https://ragdollx.substack.com

https://twitter.com/x_ragdoll

  • 32 Posts
  • 95 Comments
Joined 1 year ago
Cake day: June 20th, 2023


  • Ȉ̶̢̠̳͉̹̫͎̻͔̫̈́͊̑͐̃̄̓̊͘ ̶̨͈̟̤͈̫̖̪̋̾̓̀̓͊̀̈̓̀̕̚̕͘͝Ạ̶̢̻͉̙̤̫̖̦̼̜̙̳̐́̍̉́͒̓̀̆̎̔͋̏̕͝͝M̶̛̛͇̔̀̈̄̀́̃̅̆̈́͑̑͆̇ ̵̢̨͈̭͇̙̲͎͉̝͙̻̌͝I̷̡͓͖̙̩̟̫̝̼̝̪̟̔͑͒͊͑̈́̀̿̋͂̓̋̔͌̚ͅN̸̮̞̟̰̣͙̦̲̥̠͑̔̎͑̇͜͝ ̷̢̛̛͍̞̖̹̮͈͕̠̟̽̔̋̎͋͑̍̿̅̈́̋̕̚̚͜͝Y̴̧̨̨͙̗̩̻̹̦̻͎͇͈͎͓̩̐̓Ö̸͈̭̒̌̀̇͂̃͠ͅŨ̷̢̞̗͛̌͌͒̀̇́̽̓͑͝Ŕ̷͇͌ ̸̛̮̋̏̋̋̔͝W̶͔̄̐͋͑A̷̧̖̗͕̻̳͙̼͖͒L̴̩̰͙̾͑͑͑̒̏Ḻ̸̡̦̭͚̱̝̟̣̤͗̊́͐̋̈́̒͠͠͠͠͝S̸̯͚͈̠͍̆̉̑͗͊̄̒̏͆̔͊

  • This did happen a while back, with researchers finding thousands of hashes of CSAM images in LAION-2B. Still, IIRC it was something like a fraction of a fraction of 1%, and the images themselves weren’t actually obtainable through the dataset, since it only contains links and the originals had already been removed from the internet.

    You could still make AI CSAM even if you were 100% sure that none of the training images included it, since that’s what these models are made for: combining concepts without needing to have seen them together before. If you hold the AI’s hand enough with prompt engineering, textual inversion and img2img, you can get it to generate pretty much anything. That’s the power and the danger of these things.


  • Maybe my comment came out sounding a bit too pretentious, which wasn’t what I intended… Oh well.

    To one extent or another, we all convince ourselves of certain things simply because they’re emotionally convenient for us, whether it’s that an AI loves us, or that it can speak for a loved one and relay their true feelings, or even that fairies exist.

    I must admit that when reading these accounts from people who’ve fallen in love with AIs, my first reaction is amusement and some degree of contempt. But I’m really not that different from them, as I’ve grown incredibly attached to certain fictional characters. I know they’re fictional and were created entirely by the mind of another person simply to fill a role in the narrative, and yet I can’t help but hold them dear to my heart.

    These LLMs are smart enough to cater to our specific instructions and desires, and were trained to give responses that please humans. So while I myself might not fall for AI, others will have different inclinations that make them more susceptible to its charm, much like how I was susceptible to the charm of certain characters.

    The experience of being fooled by fiction and our own feelings is all too human, so perhaps I shouldn’t judge them too harshly.