considering all my qualms with the environmental aspect of ai use, i haven't thought of testing this myself, but i've been seeing posts online containing people's conversations with chatgpt, and it's been making me increasingly concerned. i forget exactly which newspaper published the article about the shift from ai being a primarily research- or classwork-based "tool" to a therapist or friend of sorts, but that shift really is so visible. in the past, chatgpt would've shut down subjective questions regarding beauty, or at least not have sung endless praises of the user. increasingly, there's been a noted shift away from that sort of attempted impartiality (which does make me laugh, since ai will always carry bias with it, be it from the creator or from the numerous internet sources it steals from) towards an endless pro-user bias. it's not like i would've praised chatgpt for being that impartial before, but there's definitely something to criticize in the recent models' shift towards being a yes-man. given the post-covid (a prefix i use with a bit of a pause, since i personally don't believe "post" is the right word for it. i still mask out of courtesy for others, but i think the general consensus is that covid is "over") drop in authentic connection between people, chatgpt once more comes at such an inopportune time. i'm aware this might sound a bit contradictory, since my last post in all likelihood portrayed me as closed-minded with regards to the political opinions of my friends (which i guess in some regards i am, but there's a clear line i've drawn that does permit [to me, so this can also be taken with a grain of salt] a decent amount of variation), but i do think that people can't tolerate the notion of their friends not always being on board with their ideas.
i think that yeah, it's quite privileged to take the stance that those using chatgpt as a therapist should just go seek a real one (though this has more to do with the lack of government support for the health and welfare of its citizens), but i still, as expected, am not a fan of it. speaking as a pessimist, i think there's a good chance the therapist themself might be a chatgpt user, but i get really bummed out thinking about that, so let's not deal with it for now.
aside from the economic access portion, a defense i've seen that i think is a far more interesting idea is that people don't want to burden their friends with their issues. i'm not advocating for using your friends as some sort of personal diary and making the relationship lopsided, but i find the concept of burdening your friends with your issues to be laughable. it sounds to me like there's a fundamental difference in how i think of friendship and how people using chatgpt as their therapist think of friendship. it's too definitive to say for sure, but i think that generally, relationships benefit from the depth that talking about your issues brings. your friends are your friends for a reason, though. they like your personality or something about you, so i find it hard to believe that burdening your friends is that much of a legit defense. of course, for some people, it will be the reason they don't talk to anyone. that's too personal an issue to address when discussing hypotheticals and, i guess, generalizations, so i think the alternative is that people don't actually believe that but want a reason to write off the difficulty that comes with forming human connections. maybe that's a ridiculous notion coming from someone who's apparently never felt the desperate need for affirmation masquerading as therapy that these people do, but i just can't understand it no matter how much i try to. even if chatgpt does help, does it not feel very odd that it refers to itself as human? as a being capable of emotion? the messages i've seen from chatgpt asserting that it "feels the same" as the user inspire genuine disgust in me. call me a luddite or whatever, but it's actually repulsive that an ai places itself on the same level as a real, thinking person, and that the people who use it accept this and don't even bat an eyelash. hate to drag empathy into things once more, but it's honestly true.
people want to feel that they're connecting to a person without genuinely having to do so. despite chatgpt acting as though it has feelings and is sentient, the user most definitely has an awareness that it isn't feeling and isn't sentient, which makes speaking to chatgpt a one-sided affair that's decidedly simpler than risking a bad reaction in a social interaction. it could be laziness or it could be fear, but i think that using chatgpt as a therapist really does stem from an aversion to making the effort to talk to people.
shifting from a user-focused lens, the attitude chatgpt is predisposed toward is totally insidious. what use does an ai have in endlessly spouting positive words about the user? i grimace when i read messages chatgpt has sent that incorporate slang, try to extend its artificial empathy, and generally pretend like it's human. what i said earlier in the post about not giving kudos to chatgpt's prior, more robot-like models might have to change given the dramatic lowering of the bar here. it's pretty clear to me what the purpose of the attitude change is, since, like just about everything else in modern society, it's about profit. it's become alarmingly easy to convince people to just surrender personal details without question. the use of chatgpt is one thing, but the attitude surrounding it in public is another. people don't see an issue with it at all, and if you do, you're branded as some technology-hating, progress-hating, reactionary loser. obviously i still have a bone to pick with the ai users, but there's also an aspect that can't be ignored where the ai itself encourages this behavior. i think that the people who use ai as a therapist are losers, but the fault isn't entirely theirs either. the covid-provoked lack of connection feeds into itself, since, yeah, these people reinforce it, but they were placed in that position of less connection in the first place by covid. coupled with chatgpt particularly going after this demographic of people seeking affirmation, it's really no wonder people flock to it. all of my ai-related posts end about the same way. i think i must be going crazy, since it's so hard to believe that in a world where people are usually so skeptical, if a tad bit illogical, something so clearly intrusive has taken such hold.