For a long time I’ve thought it would be cool to upload my consciousness into a machine and be able to talk to a version of myself that didn’t have emotions and cravings.

It might tell me that being around my parents has consistently had a negative effect on my mood for years now, even if I don’t see it. Or that I don’t really love X, I just like having sex with her. Maybe it could determine that Y makes me uncomfortable, but has had an overall positive effect on my life. It could mirror myself back to me in a highly objective way.

Of course this is still science fiction, but @TheOtherJake@beehaw.org has pointed out to me that it’s now just a little bit closer to being a reality.

With PrivateGPT, I could set up my own local AI.

https://generativeai.pub/how-to-setup-and-run-privategpt-a-step-by-step-guide-ab6a1544803e

https://github.com/imartinez/privateGPT

I could feed this AI with information that I wasn’t comfortable showing to anyone else. I’ve been keeping diaries for most of my adult life. Once PrivateGPT was trained on the basic language model, I could feed it my diaries, and then have a chat with myself.
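
Roughly, going by the guide above, the workflow would look something like this. Just a sketch; the folder and script names come from that guide, so treat them as assumptions rather than gospel:

```python
# Sketch of the PrivateGPT workflow from the linked guide; the folder and
# script names (source_documents/, ingest.py, privateGPT.py) are taken from
# that guide and may differ in newer versions of the repo.
import shutil
import subprocess
from pathlib import Path

REPO = Path("privateGPT")            # local clone of imartinez/privateGPT
DIARIES = Path.home() / "diaries"    # hypothetical folder of diary .txt files

# 1. Copy the diaries to where the ingestion script expects its documents.
for entry in DIARIES.glob("*.txt"):
    shutil.copy(entry, REPO / "source_documents" / entry.name)

# 2. Build the local vector store from those documents (runs fully offline).
subprocess.run(["python", "ingest.py"], cwd=REPO, check=True)

# 3. Start the interactive prompt and have that chat with myself.
subprocess.run(["python", "privateGPT.py"], cwd=REPO, check=True)
```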

I realize PrivateGPT is not sentient, but this is still exciting, and my mind is kinda blown right now.

Edit 1: Guys, this isn’t about me creating a therapist-in-a-box to solve any particular emotional problem. It’s just an interesting idea about using a pattern-recognition tool on myself and having it create summaries of things I’ve said. Lighten up.

Edit 2: It was anticlimactic. This thing basically spits out word salad no matter what I ask it, even if the question has a correct answer, like a specific date.

  • Version@feddit.de · 57 points · 1 year ago

    Mate, maybe you should just go to a therapist. That’s their job; you don’t need an AI for this.

  • glad_cat@lemmy.sdf.org · 25 points · 1 year ago

    It might tell me that

    IMHO an AI won’t be able to fix or cure all those feelings. You should see a therapist for this.

    “I like having sex with her” would be objectively quantifiable

    Again, I don’t think feelings are quantifiable; this is the main problem with AI.

  • ConsciousCode@beehaw.org · 12 points · 1 year ago

    ChatGPT can already be a pretty good tool for self-reflection. The way its model works, it tends to reflect you more than anything else, so it can be used as a reasonably effective “rubber duck” that can actually talk back. I wouldn’t recommend it as a general therapeutic tool, though; it’s extremely difficult to get it to take initiative, so the entire process has to be driven by you and your own motivation.

    Also… Have you ever watched Black Mirror? This is pretty much the episode “Be Right Back”, and it doesn’t end well.

    • renard_roux@beehaw.org · 5 points · 1 year ago

      It doesn’t end well.

      Certainly true for the majority of Black Mirror episodes 😅

      And the show is just phenomenal. In recent years, I can’t think of any other show where I’m just in near-constant awe of the writers, apart from Bluey. Watching either, my wife and I will often turn to each other at the end of an episode and go: “It’s just so fucking good.”

  • ASK_ME_ABOUT_LOOM@beehaw.org · 12 points · edited · 1 year ago

    The short story in the form of a wiki entry, MMAcevedo, seems apropos to this conversation, especially the fictional uploader’s opinions on it:

    Acevedo indicated that being uploaded had been the greatest mistake of his life

  • jecxjo@midwest.social · 9 points · 1 year ago

    Unfortunately this setup will only get you a very rudimentary match to your writing style, copying from text you’ve already written. New subjects or topics you did not feed it won’t show up. What you’d get is a machine that would be a caricature of you. A mimic.

    It’s not until the AI can actually identify the topics you prompt it with, and make decisions based on your views and how they relate to those topics, that you’ll have an interesting copy of yourself. For example, if you were to ask it for something new to cook today, PrivateGPT would only list things you’ve already stated you liked. It would not be able to understand the style of food and the flavors, and then make a guess at something else that fits that same taste.

    • ivanafterall@kbin.social · 3 points · 1 year ago

      Yeah, so the AI would STILL be very favorable about having sex with X, for example, because it’s trained on your writing/speaking/whatever.

      “What do I feel about this?”

      “Well, an average of what you’ve always felt about it, roughly…”

      • jecxjo@midwest.social · 5 points · 1 year ago

        Well, sort of. If you never talked about dating, for instance, and you then started talking to the AI about dating, it may not put two and two together and realize that it relates to sex. It wouldn’t be able to infer anything about the topic, as it only knows what the statistically most likely next word is.

        That’s what I feel like most people don’t get. Even uploading years and years of your own text will only match your writing style and the very specific things you’ve said about specific topics. That’s why the writers’ strike is kind of dumb. This form of AI won’t invent new stories, just rehash old ones.

        …oh…now I see why they are on strike.

  • kaitwalla@beehaw.org · 9 points · 1 year ago

    I think there’s an (understandable) urge from the technically minded to strive for rationality not only above all, but to the exclusion of all else. There is nothing objectively better about strict objectivity without relying on circular logic (or, indeed, arguing that subjective happiness is perfectible through objectivity).

    I am by no means saying that you should not pursue your desire, but I would like to suggest that removing a fundamental human facet like emotions isn’t necessarily the utopian outlook you might think it is.

  • PenguinTD@lemmy.ca · 7 points · 1 year ago

    A regular adult human has about 600 trillion synapses (connections between neurons), so just recording an index for each of these edges would take something like 4.3 PB (yep, petabytes). That’s not even counting what they do, just the index (because a 32-bit int isn’t enough); a quick check of that number is sketched at the end of this comment. And, just in case you don’t know, toddlers have an even higher connection count for faster learning, until our brain decides “oh, these connections are not really needed”, disconnects them, and saves the energy. It is really not within our reach yet to simulate a self-aware artificial creature, because most animals we know to be self-aware have high synapse counts.

    And yes, we are attempting this for various reasons.

    https://www.humanbrainproject.eu/en/brain-simulation/
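
    A rough back-of-the-envelope check of that storage figure (just a sketch; the 64-bit index size is an assumption, chosen because 32 bits can’t address ~86 billion neurons):

    ```python
    # Storage needed just to list one 64-bit neuron index per synapse.
    SYNAPSES = 600e12          # ~600 trillion connections in an adult brain
    BYTES_PER_INDEX = 8        # 64-bit integer index

    raw_bytes = SYNAPSES * BYTES_PER_INDEX
    print(f"{raw_bytes / 2**50:.1f} PiB")   # prints 4.3 PiB, matching the figure above
    ```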

  • Lumidaub@feddit.de · 5 points · 1 year ago

    The AI would still need to understand feelings, at least in principle, in order to interpret your actions which are based on feelings. Even “I like having sex with her” is a feeling. A purely rational mind would probably reprimand you for using contraception because what is the point of sex if not making offspring?

    • The Baldness@beehaw.org (OP) · 2 points · 1 year ago

      I would think that “I like having sex with her” would be objectively quantifiable based on how many times it was mentioned versus other mentions of the person in question.
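
      Something like this crude tally is what I have in mind (just a sketch; the file name, the stand-in name, and the phrase are made-up examples):

      ```python
      # Count how often a topic word co-occurs with a person's name across
      # diary entries separated by blank lines. Everything here is hypothetical.
      import re
      from pathlib import Path

      entries = Path("diary.txt").read_text(encoding="utf-8").lower().split("\n\n")

      name = "alex"    # stand-in for "X"
      topic = "sex"

      mentions = [e for e in entries if re.search(rf"\b{name}\b", e)]
      topic_mentions = [e for e in mentions if re.search(rf"\b{topic}\b", e)]

      print(f"{len(topic_mentions)} of {len(mentions)} entries mentioning {name!r} also mention {topic!r}")
      ```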

      • Lumidaub@feddit.de · 5 points · 1 year ago

        At that point you could search your diary entries yourself to analyse the way you talk about her. Assuming of course you’re honest with your diary and yourself and not glossing over things you don’t want to realise - in which case do you really need an AI to tell you?

        • The Baldness@beehaw.org (OP) · 2 points · 1 year ago

          Those were just generic examples. More specifically, I tend to write in my journal when I have a problem I’m trying to work out, not when things are moving along smoothly. So I would expect the chatbot to be heavily biased that way. It would still be good for recognizing patterns, assigning them a weight, and giving me responses based on that data. At least that’s my understanding of how a GPT works.

          • Lumidaub@feddit.de · 1 point · 1 year ago

            Yeh, I get that it’s just an example. But wouldn’t it be like that for anything you could ask it? It can only work with what you’re giving it and that data could be heavily influenced by you not wanting to see something. Or exaggerating. Or forgetting. A human looking at your diaries might be able to put themselves in your situation and understand, based on their own experience with the human condition, how you were probably feeling in a situation the diary entry is describing and interpret the entry accordingly, maybe even especially when considering other, seemingly conflicting entries. But they’re using “outside” information which an AI doesn’t have.

            Don’t get me wrong, I’m not saying what you’re imagining is completely impossible - I’m trying to imagine how it might work and why it might not. Maybe one way to develop such an AI would be to feed it diaries of historical people whose entries we can interpret with decent confidence in hindsight (surely those must exist?). Ask the AI to create a characterisation of the person and see how well it matches the views of human historians.

            I am so very much not an expert on AI and I hate most of what has come of the recent surge. And remember that we’re not talking about actual intelligence, these are just very advanced text parsing and phrase correlating machines. But it does of course seem tempting to ask a machine with no secondary motives to just fucking tell me the harsh truth so now I’m thinking about it too.

  • Sanctus@kbin.social · 4 points · 1 year ago

    LLMs don’t think. They return an output based on the input you give them, like an extremely complex switch or if-else statement (sort of). We’re a long way off from truly digitizing ourselves, even in a carbon-copy manner like this.

  • 𝑔𝑎𝑙𝑎𝑥𝑖@lemm.ee · 4 points · 1 year ago

    I’ve found that learning about and practicing DBT (dialectical behavior therapy) has offered me more of a skill to do this myself. I know what you mean about wishing you could see outside the frame of your emotions and past. In DBT, we have something called the “emotion mind” and the “reasonable mind.” But we need both in order to make decisions. Rationality is great, but emotion provides direction, desire, goals, and a “why” for everything we do. The idea is that when you use emotion and reason together, you can use your “wise mind”, which can help you see outside your experiences and gain perspective in new areas. I think I know what you mean because I also crave further neutral third-party understanding of my past too, and use ChatGPT a lot for that myself. Thought I would just throw in a couple more cents if you hadn’t heard of the concept. :)

  • Dankenstein@beehaw.org · 3 points · 1 year ago

    This sounds really fuckin cool.

    It probably wouldn’t be able to replicate your beliefs/morals/mannerisms consistently enough for you to not question what it says, and/or to actually predict your complete internal thought processes.

    That being said, if you were to train the AI to be a kind of automated therapist as well as train it to speak like you, it could be useful for getting some thoughts unstuck if you’re in a rut. Not sure if that’s something that’s possible yet or not.

    A real therapist might be better though.