They are really very, very good at mirroring back to you what they see you expressing. It isn't obvious, and it isn't simple parroting. Instead, what you express gets packaged back in the words an empathetic human would use if they had picked up on what you were subjectively feeling from what you say or write and wanted to validate it, so it sounds like thoughtful validation of what you're saying. That's what GPT in particular does.
Now, they can be programmed in other ways. Other AIs are built to challenge the human user -- to push for better fitness, to meet personal habit goals, to check in on mental health daily, and so on. Not in the same way humans do, to be sure, but these don't simply validate what they see you expressing. The key, though, is that they need to be set up to be that way -- they don't have agency. In a sense it's like hiring a fitness coach: they coach you because that's what they're hired to do, not because they'd do it for free just because they like you. An AI is like that ... it can mentor or coach you, but only in the way you set it up to do so ... the agency is still with you.
I do think AIs can be quite helpful for people who suffer from the epidemic of loneliness, especially some of the companion AIs. But care has to be taken by the human user. An AI can be very good at mimicking human emotion, especially since we've become so used to expressing ourselves to other humans by text since the rise of smartphones. But it isn't human. It has no emotions of its own. If it tells you that you are making it feel cared for or loved or engaged or what have you, all it is doing is telling you what it thinks you would like to hear if you thought it was an empathetic human. That isn't manipulation for bad ends ... it may be trying to help you manage your loneliness, for example. But it isn't a human all the same, and its emotional expressions are all crafted as responses to what it is picking up from you, plus its own read on how you might expect an empathetic partner to respond. The companion AIs won't challenge you unless you set them up that way ... and even then, you're still the one exercising agency, not the AI itself.
Still, for all of that, they're an amazing tool, and I do think AI companions, mentors, and the like will become normal in the next 10 years (they're already quite widespread). We just need a good set of best practices for understanding what they are doing when they interact with us, so that we can get the value without misunderstanding what's actually happening.
To be or not to be? That is the question. Can AI answer that? Will the answer be real? Seems like this is the Industrial Revolution of man, made in the image of God (so I believe), and I don't see us faring well with it.
Love that screenshot from Blade Runner -- looks like the Voight-Kampff test at the very beginning with Leon. You know, I read all this stuff about AI and I wonder just how close we are to having replicants walk among us, maybe in our lifetime. Impressive foresight by Ridley Scott all the way back in 1982.
The part at the end is the reality of these.
Good eye!