Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record by comment ID.
Random samples — click to inspect
- `ytc_UgzR5V0q3…`: "Great interview. Sad to hear that she that she is hooked into so-called "Global …"
- `ytr_Ugx0Q2hrw…`: "@laurentiuvladutmanea I am not defeatist since I have no horse in the race. I a…"
- `ytc_Ugw6mOKRg…`: "if you really want to damage ai. embed an virus in your image and break their s…"
- `ytc_Ugw6YXAJH…`: "There are major holes in the idea that AGI is autonomous. Ai cannot work without…"
- `ytc_UgyVRugzJ…`: "This AI: This is the begining of my plan to dominate the human race. Le me: Lemm…"
- `ytr_UgxvOe6qF…`: "*This made me think of a classic Buddhist question: where is the self?* *If I l…"
- `ytc_UgwRYorWR…`: "ATTENTION K-POP AND EYEKONS! There are some creeps using AI to create n@ked/ ina…"
- `ytr_UgzLfMXfb…`: "But I guess knowledge and reason go together. But why you sit there you bending …"
Comment
I would simply say it’s a people problem. Why? Because we take advice from things that we shouldn’t, to seriously. Like TikTok as an example or those sites that claim natural oils cure cancer. Some actually believes this without a second thought and does it. Only then do they wound up in the hospital or worse and then blame AI and those who made it like ChatGBT. AI isn’t all knowing like people want to believe. It literally takes the available information from the web and barfs it out in a way that makes “sense”. In fact AI is still highly unreliable despite what people WANT to believe. This is why we need to remember that even though AI has its roots all tangled into the web, it is only as smart as those that made it and what IS available.
youtube · AI Harm Incident · 2025-11-25T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_Ugzp2mpmsVRsAXFGaZJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx7lSm0qmn4EDTuqP94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz4oLQZwX55dbrACJF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy3ZiZD5j8BWYiayVV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz4Y9f9kwCyll0ra2F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzH0K4PE0xSBBfHU3x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyPpodWcmxK9e-jcIF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwGoe7bGvgDjUeldSJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxyLthjV-WZWUSy7G94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzUsV6QlkGIKaz2jQF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}]
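The raw response above is a JSON array of per-comment codes, keyed by comment ID. A minimal sketch of the "look up by comment ID" step is below: it parses such a response, indexes it by ID, and flags missing or out-of-vocabulary values (which would surface as "unclear" dimensions like those in the Coding Result table). The function names are illustrative, and the controlled vocabularies are assembled only from values visible in this record; the pipeline's actual code sets may be larger.

```python
import json

# Allowed values per coding dimension, collected from the raw response above.
# ASSUMPTION: the pipeline's real controlled vocabularies may include more values.
DIMENSIONS = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"virtue", "deontological", "consequentialist", "unclear"},
    "policy": {"none", "liability", "regulate", "industry_self", "unclear"},
    "emotion": {"indifference", "fear", "mixed", "resignation", "outrage", "approval", "unclear"},
}

def index_codes(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of code records) and
    index the records by comment ID so a single comment can be looked up."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

def validate(rec: dict) -> list:
    """Return a list of problems with one record: dimensions that are
    missing entirely, or values outside the expected vocabulary."""
    problems = []
    for dim, allowed in DIMENSIONS.items():
        value = rec.get(dim)
        if value is None:
            problems.append(f"missing {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim}={value!r}")
    return problems
```

For example, `index_codes(raw)["ytc_UgzH0K4PE0xSBBfHU3x4AaABAg"]` would return the sixth record above, and `validate` on it would return an empty list, since every dimension carries an in-vocabulary value.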