Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
He thinks like this because he himself is an idiot. AI is not like some other technolog…
ytc_UgyJ6DkKO…
"A Channel 4 News analysis of the five most visited deepfake websites found almo…
rdc_kwcac9n
nah, any ai "art" is fake and wont get accepted lol. AINT NO ONE BELIEVING U DRE…
ytc_UgyI_up8Z…
We need to give AI the correct fundamentals. The 1st four dimensions start with …
ytc_Ugzd-cw-_…
This is no different than when the US business offshored many millions of jobs. …
ytc_UgxTn5SQU…
I'm less worried about super intelligent AI taking over than I am about the 10 A…
ytc_UgzKrVVca…
@conspiracymusic with conspiracy in your name, surely you have considered the si…
ytr_Ugw_iX4V5…
Yes but it shows the difference between what's promised by the sellers and what …
ytr_UgwnRibmc…
Comment
Guys, I’m a software engineer. DO NOT use AI for therapy. The panic over this is not the same thing as when they said video games were dangerous. These are language algorithms that are designed to be so convincing that your brain literally processes interactions with them to be as real as those with human beings. You are not immune to being influenced emotionally by them even if you logically know they are not real. AI is meant to agree with you and to make you feel secure. Anybody experiencing delusions, psychosis, extreme loneliness, narcissism, or other mental illness is more likely to be made worse by their interactions with AI than they are to get better. Even if that algorithm is “dressed up” as a psychologist, it cannot diagnose someone or come up with a proper treatment plan. It is simply not designed to do that. Please, for your own sake, DO NOT DO THIS.
youtube
AI Harm Incident
2025-08-24T04:3…
♥ 226
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzqRwqV0CelpMPn4rF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyZkr41lYh4C7L9Be94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"sadness"},
{"id":"ytc_UgyX0O6qJPZgIAVLJ214AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzcOFzSMkiCe4XcglB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx4rIrl0ge0juUZ9qd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzlD9LwUkU9IfLR3lx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyHbiMHj2Ud6KWvvHh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwCbLnFRl4n2104PF94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugwh-lmiKQ7cO2lNVOB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy2_iM6ihRx2_pUjeR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
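The raw response above is a JSON array mapping each comment ID to the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response might be parsed and sanity-checked downstream; the allowed value sets below are inferred from the codes visible on this page, not from a published codebook, so treat them as an assumption:

```python
import json

# Allowed values per dimension, inferred from codes visible on this page
# (hypothetical -- the actual codebook may define additional categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "distributed", "company"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"resignation", "sadness", "indifference", "approval",
                "outrage", "mixed", "fear"},
}

def parse_coding(raw: str) -> dict:
    """Parse a raw LLM coding response and index the codes by comment ID,
    raising ValueError on any dimension value outside the allowed sets."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row[dim]!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Usage with a single hypothetical row shaped like the response above:
sample = ('[{"id":"ytc_X","responsibility":"company",'
          '"reasoning":"consequentialist","policy":"liability",'
          '"emotion":"fear"}]')
codes = parse_coding(sample)
print(codes["ytc_X"]["policy"])  # -> liability
```

Validating against a fixed vocabulary like this catches the common failure mode where the model invents a label outside the coding scheme.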