The tech giant is evaluating tools that would use artificial intelligence to perform tasks that some of its researchers have said should be avoided.

Google’s A.I. safety experts had said in December that users could experience “diminished health and well-being” and a “loss of agency” if they took life advice from A.I. They had added that some users who grew too dependent on the technology could think it was sentient. And in March, when Google launched Bard, it said the chatbot was barred from giving medical, financial or legal advice. Bard shares mental health resources with users who say they are experiencing mental distress.

  • ExLisper@linux.community · 11 months ago

    Of course I’m just playing ‘what if’ but I really can see this happening. Imagine: you get out of work, get into an autonomous car and Alexa tells you, “I’m taking you to your new apartment. I arranged for your things to be moved there today and updated your home address with your bank, Amazon and the municipal registry. According to my analysis your commute time will be 10% shorter, you will save $100 per month on average and the style matches your preferences 5% better. Overall you will be 12% happier there.” You get there, you actually do like it and you are 12% happier. Most people would just go with it. We would be the rebels hiding in forests to avoid the algorithm and live our own lives. Sometimes we would be 12% less happy than the human-robots, but at least we would think for ourselves…