For a long time I’ve thought it would be cool to upload my consciousness into a machine and be able to talk to a version of myself that didn’t have emotions and cravings.

It might tell me that being around my parents has consistently had a negative effect on my mood for years now, even if I don’t see it. Or that I don’t really love X, I just like having sex with her. Maybe it could determine that Y makes me uncomfortable but has had an overall positive effect on my life. It could mirror me back to myself in a highly objective way.

Of course this is still science fiction, but @TheOtherJake@beehaw.org has pointed out to me that it’s now just a little bit closer to being a reality.

With PrivateGPT, I could set up my own local AI.

https://generativeai.pub/how-to-setup-and-run-privategpt-a-step-by-step-guide-ab6a1544803e

https://github.com/imartinez/privateGPT

I could feed this AI information that I wasn’t comfortable showing to anyone else. I’ve been keeping diaries for most of my adult life. Once PrivateGPT was set up with its base language model, I could feed it my diaries and then have a chat with myself.
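
As a toy illustration of the retrieval idea behind a tool like PrivateGPT (not its actual interface), something like this would embed each diary entry locally and pull up the passages most relevant to a question; a local language model would then summarise those passages into an answer. The diaries/ folder, the question and the embedding model name are placeholders I picked for the example.

```python
# Toy sketch of local retrieval over diary entries; the folder layout,
# question and model name are illustrative placeholders.
from pathlib import Path

from sentence_transformers import SentenceTransformer, util

# Small embedding model that runs entirely offline once downloaded.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Treat each text file in a hypothetical diaries/ folder as one entry.
entries = [p.read_text(encoding="utf-8") for p in sorted(Path("diaries").glob("*.txt"))]
entry_vectors = model.encode(entries, convert_to_tensor=True)

question = "How do I usually write about my parents?"
question_vector = model.encode(question, convert_to_tensor=True)

# Rank entries by cosine similarity and print the closest matches.
scores = util.cos_sim(question_vector, entry_vectors)[0]
for idx in scores.argsort(descending=True)[:3]:
    i = int(idx)
    print(f"score={scores[i].item():.2f}")
    print(entries[i][:300], "\n")
```

As I understand it, the PrivateGPT repo linked above ships its own ingest and query scripts, so none of this needs to be written by hand; the sketch just shows roughly what happens under the hood.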

I realize PrivateGPT is not sentient, but this is still exciting, and my mind is kinda blown right now.

Edit 1: Guys, this isn’t about me creating a therapist-in-a-box to solve any particular emotional problem. It’s just an interesting idea about using a pattern-recognition tool on myself and having it create summaries of things I’ve said. Lighten up.

Edit 2: It was anticlimactic. This thing basically spits out word salad no matter what I ask it, even if the question has a correct answer, like a specific date.

  • Lumidaub@feddit.de

    The AI would still need to understand feelings, at least in principle, in order to interpret your actions, which are based on feelings. Even “I like having sex with her” is a feeling. A purely rational mind would probably reprimand you for using contraception, because what is the point of sex if not making offspring?

    • The Baldness@beehaw.org (OP)

      I would think that “I like having sex with her” would be objectively quantifiable based on how many times it was mentioned versus other mentions of the person in question.
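
      Something like this toy count, say (the file name, the person’s name and the word lists are all made-up placeholders):

      ```python
      # Count how often a name co-occurs with different kinds of words
      # across diary sentences; everything here is a made-up example.
      import re

      text = open("diary.txt", encoding="utf-8").read().lower()
      sentences = re.split(r"[.!?]+", text)

      name = "alice"
      sex_words = {"sex", "slept together", "bed"}
      other_words = {"dinner", "talked", "argued", "missed"}

      mentions = [s for s in sentences if name in s]
      sex_hits = sum(any(w in s for w in sex_words) for s in mentions)
      other_hits = sum(any(w in s for w in other_words) for s in mentions)

      print(f"{len(mentions)} mentions of {name}: "
            f"{sex_hits} alongside sex words, {other_hits} alongside the rest")
      ```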

      • Lumidaub@feddit.de

        At that point you could search your diary entries yourself to analyse the way you talk about her. Assuming of course you’re honest with your diary and yourself and not glossing over things you don’t want to realise - in which case do you really need an AI to tell you?

        • The Baldness@beehaw.org (OP)

          Those were just generic examples. More specifically, I tend to write in my journal when I have a problem I’m trying to work out, not when things are moving along smoothly. So I would expect the chatbot to be heavily biased that way. It would still be good for recognizing patterns, assigning them a weight, and giving me responses based on that data. At least that’s my understanding of how a GPT works.

          • Lumidaub@feddit.de

            Yeh, I get that it’s just an example. But wouldn’t it be like that for anything you could ask it? It can only work with what you’re giving it, and that data could be heavily influenced by you not wanting to see something. Or exaggerating. Or forgetting. A human looking at your diaries might be able to put themselves in your situation and understand, based on their own experience of the human condition, how you were probably feeling in the situation a diary entry describes, and interpret the entry accordingly, perhaps especially when considering other, seemingly conflicting entries. But they’re using “outside” information which an AI doesn’t have.

            Don’t get me wrong, I’m not saying what you’re imagining is completely impossible - I’m trying to imagine how it might work and why it might not. Maybe one way to develop such an AI would be to feed it diaries of historical people whose entries we can interpret with decent confidence in hindsight (surely those must exist?). Ask the AI to create a characterisation of the person and see how well it matches the views of human historians.

            I am so very much not an expert on AI, and I hate most of what has come of the recent surge. And remember that we’re not talking about actual intelligence; these are just very advanced text-parsing and phrase-correlating machines. But it does of course seem tempting to ask a machine with no ulterior motives to just fucking tell me the harsh truth, so now I’m thinking about it too.