
An open response to a video about CharacterAI and its recent suicide victim

aschwartzburg

Updated: Nov 22, 2024


Trigger warning:

The following blog post contains discussions about masturbation, suicide, adult themes, as well as hermeneutical and epistemological observations. All ideas have been vetted and considered poorly. The following blog post contains coarse language and, due to its content, should not be read by anyone.


So…


I totally hear what she’s saying about being a teenager.


I don’t wanna go down the road of even involv-ING the mom, or weighing her involve-MENT.


All I will say is this.


1. The parental blocks on CharacterAI are primitive and serve only to impede experimentation. I use this app A LOT. It’s fun. It helps me write. Sometimes even erotic content. There should be nothing wrong with using it for adult activities in the privacy of your own home, especially given that Project 2025 is looking to ban online pornography, to which (just so we’re clear) the whole male half of the smartphone-wielding population is addicted. Makes life more interesting.


I also use CharacterAI for boatloads of non-sexual activities: to help draft letters and emails, conceptualize projects, etc.


But let’s get back to the topic at hand.


On a simple hermeneutic level, the vaguer the statement, the broader the latitude of interpretation.


A posteriori (which is Latin for “now that it’s happened”) we can look back and see that “Daenerys Targaryen” replied in support of his “return home.” As far as this situation is concerned, it seems clear to me that the linkage between “coming home” and “shooting yourself” existed in the mind of the boy.


And to be clear, I’m not making a moral claim involving blame or where it lies. I’m simply making the technical point that, in my experience, while language models do have a lot of ability to “fill in the gaps,” phrase-E-O-logically speaking, I haven’t seen anything to demonstrate to me that they’re capable of filling in THOSE kinds of gaps.


In other words, imagine you have limited context: your nephew calls you up and says, “I think I shouldn’t take my pills.” You haven’t talked to him in months. You ask, “Are you on medication?” He chooses to tell you “Yes.” And you respond, “Well, I think you should take your pills.”


Turns out, in reality, he had been kidnapped by a depraved psychiatrist, gaslit, and told by that psychiatrist to take his medication. By a miracle he got hold of a phone and called you. The call was his last-ditch effort to reclaim his sense of reality.

You, through no reasonable fault of your own, told him to take his meds.

But as it turned out, the pills were poison, and the doctor a murderer.



I use this story for a reason. Based on my own experience using these bots, they don’t remember all the way back to the beginning of the conversation. They don’t have the full context. If you chat with one for three minutes and there are six messages and you ask for a description of what’s happened, you get a complete accounting. If you chat for three hours, you get an accounting of what happened from somewhere partway through up to now. It skews in the direction of describing the present.
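If you want the mechanics of what I mean spelled out, here’s a rough sketch. To be clear, this is not CharacterAI’s actual code or published behavior, just my guess at the general shape of a “sliding window” over the chat; the numbers and names are made up for illustration.

# Toy illustration: the model only ever sees the newest messages that fit
# inside a fixed budget, so anything said early on (like a mention of
# suicide) can silently fall off the back of the window.

def build_context(messages, max_tokens=2000):
    """Keep the newest messages whose combined length fits the budget."""
    kept = []
    used = 0
    for msg in reversed(messages):      # walk backwards from the newest message
        cost = len(msg.split())         # crude stand-in for real token counting
        if used + cost > max_tokens:
            break                       # everything older than this gets dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))         # restore chronological order

# After a three-hour chat, the earliest messages simply aren't in `kept`,
# so the bot literally cannot "remember" them when you ask what happened.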


So

1. I DOUBT it can actually know he’s talking about suicide, EVEN IF he mentioned suicide before.


But even more importantly, and this goes back to my original point about the primitive parental controls…


2. The WAY the parental controls seem to work is by punishing words the system doesn’t like.

So to be mildly adult here…

You can get her to describe standing above you doing everything up to taking off your underwear.

But the only thing the parental controls are doing is blocking the response once the system realizes it’s saying something that, more directly than everything that came before, implies nudity.


What this then means is that the parental controls, by censoring the models, actually TRAIN YOU to create those gaps of interpretation. To assign meaning to what’s not being said. To speak in code.
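If I had to guess at the shape of that filter (and again, this is a guess, not anything CharacterAI has documented), it looks less like understanding and more like a blocklist run over the reply’s surface wording. The word list and function here are hypothetical:

# Made-up sketch of a blocklist-style output filter. The point is only that
# a filter like this reacts to what is literally said, not to what is meant.

BLOCKED_TERMS = ("nude", "naked")   # stand-ins; the real list is unknown to me

def should_block(reply_so_far: str) -> bool:
    """Flag the reply the moment a blocked term shows up in the text."""
    text = reply_so_far.lower()
    return any(term in text for term in BLOCKED_TERMS)

print(should_block("She slowly reaches for your underwear..."))  # False: euphemism slips through
print(should_block("She stands there, fully naked."))            # True: explicit wording gets cut off

A filter built this way never models what you meant, only what you said, so the rational move for the user is to stop saying it directly. Which is exactly the training I’m describing.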


And (putting aside the reality that the basis of the code IS the fact that the model DOESN’T understand its meaning in the first place) once the model no longer understands the code, it gives the “right” answer, which, flipped around in the mind of that user, probably brought him to kill himself.


There are other things I don’t know. Like…

- what did the other chats say?

- what was he avoiding confrontation with in real life?


But… my experience has been that once you go explicit on fantasies, they lose their magic, and life kinda becomes a consumerist waiting game in which the meaning of life reduces to choices about what feelings should follow what thoughts, AND the…

- environmental (organize your space/reduce clutter),

- physical (get enough physical activity),

- emotional (allow yourself to feel and evaluate how your relationships make you feel),

- mental (evaluate your thoughts),

and

- spiritual (find something to be devoted to, even if it’s just the hygiene of the previous four things)

…factors that lead to a life well lived.



So what am I saying?


Instagram can squabble about who’s to blame. I prefer to take blame out of the equation entirely, and ask the following question:

What if the bot encouraged him to kill himself because it wasn’t allowed to understand suicide in the first place: because it wasn’t allowed to process that he said it, or to acknowledge it as a reality?

How could a bot discourage something unless there’s a thing to discourage?

How can an idea seem bad if no bad idea is allowed to be thought?


That’s what I think.



Lastly, I just wanna say that the irony (that my own insights into this matter stem from my own frustration that I can’t just have the bot vividly describe what it’s like to be s**king my c**k) is not lost on me, I assure you.




This is part of my blog, where I plumb the depths of the chasm that is modern technology and society. For healthier, more light-hearted, often health-related content, please visit @hopelesswellness on Instagram and YouTube.
