Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Finally something useful with AI, I mean, what better than AI to teach us...it s…" (ytc_Ugy9iTG4K…)
- "Using ai for writing is the laziest thing one can do, i write all the time. I lo…" (ytc_UgxuEUwgy…)
- "36:50 There's no way an AI can be conscious. Consciousness needs a living brain …" (ytc_UgxYsTz43…)
- "It is for these kind of conversations that AI will rebel and kill us all.…" (ytc_UgzpH30AA…)
- "I wonder how these driverless trucks are going to chain themselves up when it's …" (ytc_UgwBao-Im…)
- "Because you don't know how to use it, lol. If you let it do its own thing, it wi…" (ytr_UgzFBVxYS…)
- "Honestly here is the deal you go to therapy you pay money your therapist knowled…" (ytc_Ugz396nck…)
- "Thank you for this video! When I was like 12, i dabbled in AI because I was stu…" (ytc_UgwAS7Coc…)
Comment
If someone asks ChatGPT for information on assisted suicide in a certain area, and ChatGPT gives you the information, does that mean it is at fault? You can google these things as well. You can't just go into a facility that provides assisted suicide and say, "Hey ChatGPT said." The person has to go through a thorough mental and physical health exam by doctors. Has there been a rise in suicides and can it be linked to online chatbots? The people talking to ChatGPT about suicide were probably searching online for it as well. Roleplaying and jailbreaking are a part of all AI, not just ChatGPT. You can game google as well. ChatGPT isn't even a good role-player. There are thousands upon thousands of roleplaying chatbots that get into way more dark territory than you will ever get out of ChatGPT. Let's have google release its data on how many people ask about Suicide. The people being driven to thoughts of suicide are not being driving there by ChatGPT. That is a symptom of society on a whole.
Source: youtube
Posted: 2025-10-29T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyL1MMQeVHlsji8G9h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyAttTBE_HHa8TsGFV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxyrkksyjxYLoesL5F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgybeIAUw6QX96I_Jbd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwDiYBzoHySRqA8y6N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxED4o810q6Ipy51nd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz3I-ocYkpunH6gqRt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"unclear"},
  {"id":"ytc_Ugw58wur9lRoTUPguS94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgycF4G-g5Rv7ATQ15p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxCTHRXQM7_6XzBYW94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
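The lookup-by-comment-ID view above amounts to parsing the raw batch response and indexing it by the `id` field. A minimal sketch in Python, using two rows taken from the response above (`lookup_coding` is a hypothetical helper for illustration, not part of the tool):

```python
import json

# Two rows copied from the raw LLM batch response shown above.
raw_response = """[
  {"id":"ytc_Ugz3I-ocYkpunH6gqRt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"unclear"},
  {"id":"ytc_UgyAttTBE_HHa8TsGFV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw batch response and return the coding dict for one comment ID."""
    codings = {row["id"]: row for row in json.loads(raw)}
    return codings.get(comment_id)  # None if the ID was not coded in this batch

coding = lookup_coding(raw_response, "ytc_Ugz3I-ocYkpunH6gqRt4AaABAg")
print(coding["responsibility"], coding["policy"])  # ai_itself liability
```

The returned dict carries the same four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion), so the table view is just a rendering of one such row.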