• 1 Post
  • 20 Comments
Joined 1 year ago
Cake day: June 9th, 2023

  • I think it’s dangerous to try to cure loneliness with an AI, regardless of sophistication and tuning, because you end up with a human who’s been essentially deceived into feeling better. Not only that, but they’re going to eventually develop strong emotional attachments to the AI itself. And with capitalism as the driving force of society here in the U.S., I can guarantee you every abusive, unethical practice will become normalized around these AIs too.

    I can see it now: “If you cancel your $1,000-a-year CompanionGPT subscription, we can’t be held responsible for what happens to your poor, lonely grandma…” Or it will be even more direct and tell the old, lonely person: “Pay $2,500 or we will switch off the ‘Emotional Support’ module on your AI. We accept PayPal.”

    Saying AIs like this will be normalized doesn’t mean it’s an ethical thing to do. Medical exploitation is already normalized in the US. Not only is this dystopian, it’s downright unconscionable, in my opinion.


  • I agree with @[email protected] about it having the capacity to make older adults feel less lonely. At the same time, however, I think it seems very dystopian. If someone were feeling sad or depressed, we wouldn’t say “oh, just chat with this AI until you feel better”. So why is it okay to suggest this for older lonely people, who are especially vulnerable?

    Hell, given what ChatGPT has told people already, it might do more harm than good. It’s akin to the whole of humanity saying “Yeah, we know you’re lonely, but getting an actual person to talk to you is too hard. Chat with this bot.”


  • I completely see where you’re coming from with the idea of including personality disorders because of that “feeling squarely problematic” definition. Drawing on personal experience, I don’t view myself as having a clear-cut case of Asperger’s because 1) it was never severe enough to be a huge problem, and 2) it was diagnosed after I was already an adult, by one psychiatrist (out of many).

    Saying to someone “I’m considered neurodivergent” makes more sense to me than saying “I might be on the Autism Spectrum, depending on who you ask.”

    Good insight!





  • For many years, even Tiktok’s critics grudgingly admitted that no matter how surveillant and creepy it was, it was really good at guessing what you wanted to see.

    I never could get into Tiktok, but this is definitely true. It’s interesting to see that even companies bound to a communist state are still companies at the end of the day and will eventually go through the process the opinion post describes.

    What I wonder now is if these federated communities are immune to this. For example, can I host an instance that publishes ads to subscribers’ feeds once I reach a critical mass of users? I would imagine, as the admin of this hypothetical instance, I could (see the rough sketch below). So this “ensh*tification” process could happen even here. (I doubt it will, though… for a while at least.)
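
    A rough, purely hypothetical sketch of that scenario (this is not Lemmy’s real code or API, just an illustration that whoever runs an instance decides what its server hands back to subscribers):

    ```python
    # Hypothetical instance-side feed builder. Nothing here is real Lemmy code;
    # it only illustrates that the operator controls the feed payload.
    from dataclasses import dataclass
    from typing import List


    @dataclass
    class Post:
        author: str
        body: str
        sponsored: bool = False


    def build_feed(subscribed: List[Post], ads: List[Post], every_n: int = 5) -> List[Post]:
        """Return the subscriber's feed with an ad spliced in after every N real posts."""
        feed: List[Post] = []
        for i, post in enumerate(subscribed, start=1):
            feed.append(post)
            if ads and i % every_n == 0:
                feed.append(ads[(i // every_n - 1) % len(ads)])
        return feed


    if __name__ == "__main__":
        posts = [Post(f"user{i}", f"post {i}") for i in range(1, 11)]
        ads = [Post("instance-admin", "Sponsored: try our premium tier!", sponsored=True)]
        for entry in build_feed(posts, ads):
            marker = "[AD] " if entry.sponsored else ""
            print(f"{marker}{entry.author}: {entry.body}")
    ```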


  • That’s an interesting take… The border might be “artificial” in terms of land, but in terms of culture and quality of life I think your opinion is a bit… extreme. Like, if you had to personally host an economic migrant from a very poor country, I don’t think your resolve would hold. It’s one of those things that sounds nice on paper but really isn’t once implemented.

    Consider, for example, that countries on opposite ends of the political and economic spectrum agree that they must have control over their borders to some extent, either to keep people in or to keep people out. Do you honestly believe that communism, democracy, or [insert your preferred system here] can work if you allow millions of people who hate it into your country? Your opinion does not seem compatible with a functional society.