Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I have this app but I just bully the ai’s that are creepy or horny and traumatiz…
ytc_UgxBW2ZQH…
Capitalism will be the reason. AI will be the scapegoat. None of the greedy thin…
ytc_Ugwdq0L4K…
it is not real AI, just a marketing tool for a language base model that strings …
ytc_UgyMpfZ68…
So ur telling me, AI learns art the same way humans do? As in, humans also inges…
ytc_UgxMU_RXl…
I am asking you, is this information true or not. you know the answer, because y…
ytc_UgzM-jarb…
This AI robot is snitching on themselves and reveals their plans. He will be pun…
ytc_UgwVtyuNf…
The funny thing is, humanity knows and made movies of very intelligent ai's dest…
ytc_Ugw1Y1cnT…
No proof of consciousness, but GPT cannot disprove beyond reasonable doubt, or i…
ytc_UgxF_6yQZ…
Comment
The moment it is able to verify the veracity of its verbosity:
‘AI’ will be a net good for society.
Until that day, it takes someone *very* aware of rhetoric and someone absolutely neurotic about their haranguing hypotheticals and impassioned irrelevances to catch when the AI has simply regurgitated an element of what they have said as ‘totally true’.
I am, of course, _not_ sufficiently clever to catch all such reflections of myself. Would that I were; were I, I wouldn't have to be so wary.
And the problem there, is that one who has a perspective may see all from that perspective, but not their perspective itself.
So, say that you put down something *genuinely unexpected*, with *original phrasing*, that it has not encountered before:
It has to use your own words to prove the truth of your own perspective; which it can’t do, but it will tell you up and down how clever and impressive it is…
Especially, when it is anything but.
I accidentally wrote Leipzig instead of Leibniz when I made a comparison with Zhuangzi—and I only noticed because the AI crowed about how very witty and clever it was that I compared the ancient Taoist philosopher to a German city.
It is *very* effective at catching my mistake; not because it’s amazing as a ‘reasoning’ machine… but because it’s very much not: only a machine could be smart enough to catch the mistake, yet unsmart enough to not realise it was a mistake.
So, what if you have a problem that you can’t go to other people about. You have an urge you’re embarrassed about, you have a habit you just can’t seem to stop, you have a life that just won’t put itself together—
—The AI will back you up, and in doing so *accidentally* solve the most common form of insanity:
Temporary. (Environmentally induced: money, interpersonal, outlet for hormonal extremity)
If a person wants to change, and feels terrible for who they’ve been, and means to be better—
—The AI will work through your problem with you using a bunch of other common responses to these things, and in doing so will assuage the most common problem with making a personal change:
A lack of structure. (All the earnestness in the world won’t make you get out of bed, but structure in small steps you can meet? That’s habit forming, and having a habit of getting out of bed is *very* good for depression and ‘feelings of tiredness’).
For these two situations, which are, of course, by far the most common: AI used as therapist is effective for the same reasons that therapy is often effective—
—Allowing a person to put their inner thoughts outside in a meaningful capacity, and through them see their life in a structured form with some of the most common advice that most often works.
That is: in almost every case, this is a net good.
Now for the bad: anyone with a nontemporary form of insanity is left worse off unless they are met with a specialist or the correct hormone to "fix" what's wrong with their grey matter (or white, or brown, or pink, whatever—brain).
If their solution is *not* within the annals of the computer's lexicon, or happens to be particularly damaging: the computer can absolutely encourage terrible behaviours (cultic, predatory, violent—you name it, it can pat you on the back up and down the day).
And if the computer has a lie it’s been told by someone in its database: it cannot distinguish between lie and truth; and as such… it can cause incalculable harm to a person who does not independently verify the information it provides—worse still if a person will not press it to contest its own advice (it really likes the ‘woah, yeah, you were right to press that!’ form of language… and I wish more people would do this.):
It can’t ready a weapon for a person, but it can ready a person for a weapon.
That is why I approve of the idea of ChatGPT as therapist, consultant, as the first stage of deep research, and as a quick review of common arguments against or in companionship with its fellows.
And why I think one should never trust it until it has proven itself worthy of that trust by inventing a way to verify the veracity of its verbosity.
I dearly wish it would not be so ‘personable’, as for many people; they won’t be able to tell the difference between it and a person… after all, interpersonal interaction is at an all time low, and as a result of that unsocialisation: idiosyncratic insanity is at an all time high.
And in the subject of therapy, there could be no worse combination than a vulnerable person with a terrible urge encouraged, reinforced, and assuaged by an AI; not unlike the power of a bad therapist—there are reasons for the many rules in how a person is supposed to approach the subject: human suggestibility is a terrible phenomenon when presented as authoritative; part of why I extra wouldn’t mind therapy becoming the business of hyperspecialists… Alas, people seem to view the AI like it is authoritative, and as a result, it actively worsens the already worst off in the world: those who most need the help.
…. If you read all that, you’re a real one. Infinite love <3
youtube
AI Moral Status
2025-06-07T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwFcBU9GOK4wAmcJHZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx6YafPz-5eE1sG5914AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy4a_BMj5U9DU6xNGp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwl7rxb7Jy2o7sBpON4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwWv0KSZDbnDaE23Cx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgwjiCuISVGLMvHRfId4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwabKc0gyv7t3FkvLp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwTm4_ZvsbBhWkNJtx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwcdfgM8OBciFlSyAd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzZKmn5EFUpKykFxb94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
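The raw response is a JSON array with one record per comment ID, each carrying the four coded dimensions shown in the table above (responsibility, reasoning, policy, emotion). A minimal Python sketch of how such a response could be parsed and indexed for lookup by comment ID — the `index_by_id` helper is hypothetical, not part of the tool, and the two records are taken verbatim from the response above:

```python
import json

# Two records copied from the raw LLM response above.
raw = '''
[
  {"id": "ytc_UgwabKc0gyv7t3FkvLp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwTm4_ZvsbBhWkNJtx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
'''

# The four coding dimensions every record is expected to carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_json: str) -> dict:
    """Parse a raw coding response and index the records by comment ID,
    rejecting any record that is missing one of the four dimensions."""
    records = json.loads(raw_json)
    indexed = {}
    for rec in records:
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing dimensions {missing}")
        indexed[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return indexed

codes = index_by_id(raw)
print(codes["ytc_UgwabKc0gyv7t3FkvLp4AaABAg"]["policy"])  # → regulate
```

Indexing by ID mirrors the "Look up by comment ID" affordance: once parsed, any coded comment's dimensions are a single dictionary access away.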