Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "It's not A.I. that is the issue. It's us. A.I. could solve so many problems if…" (ytc_Ugxs3k4aM…)
- "Yes, 99% of the information the AI will provide is information you can find thro…" (ytr_UgyM7egx7…)
- "This sounds like the Short Circuit movie from 1986 about a robot being alive. Th…" (ytc_Ugx81y9hm…)
- "I come up with guitar riffs that I love and normal drum programmes just confuse …" (ytc_UgxnWmNy1…)
- "I think his existential conclusions are way wrong. We arent going to get cheap …" (ytc_UgzuD1_0O…)
- "My optimistic take: One that I feel pretty likely, based on Human nature. AI w…" (ytc_Ugxe6ya4s…)
- "AI is a mirror. let's go with that. Most of you can figure that out. Watch the c…" (ytc_UgzQGXRkf…)
- "This is exactly what's playing out in offices right now. Someone I know used AI …" (ytc_UgyDe3A_f…)
Comment
Simple solution, AI is inauthentic, deceiving, and most obvious of all is that its a compilation of programming pretending to think. So just dont use AI unless its for something super insignificant because we obviously cant trust AI to do a job the way a human would, it might be more efficient at work, but its not a human, now this is where it turns complicated since companies always want to continue to "innovate" and AIs are just going to continue to be created and improved, but what if people just stopped using AI as a whole?
youtube · AI Harm Incident · 2025-07-28T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgweufPuF9VahCY_Xd14AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw3ZkoW2jS9eopygVZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzt7BD4_IJvv5oe-2F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzjEsVzz4drHXk-FUx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwTSVjEZKDhomAWxT54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwUBwSSuBKKyb7KNbx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwCTpm5JuSHV-fFZQx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy2IQYmbsWhqiRoWyF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx_Zx2Hc655eWzydK94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwQYCoHUqnTcr4i4tp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
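A response like the one above can be checked before the coded dimensions are stored. Below is a minimal sketch of such a check in Python; the per-dimension value sets are inferred only from the records shown here (the actual codebook may define more categories), and the function name `validate_response` is illustrative, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the response shown above.
# Assumption: the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "developer", "company", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"ban", "liability", "none", "regulate", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed", "approval"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each coded record."""
    records = json.loads(raw)
    for rec in records:
        comment_id = rec.get("id", "")
        # Sample IDs above start with ytc_ (comments) or ytr_ (replies).
        if not (comment_id.startswith("ytc_") or comment_id.startswith("ytr_")):
            raise ValueError(f"unexpected comment id: {comment_id!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: bad {dim} value {rec.get(dim)!r}")
    return records

raw = (
    '[{"id":"ytc_Ugy2IQYmbsWhqiRoWyF4AaABAg","responsibility":"company",'
    '"reasoning":"deontological","policy":"liability","emotion":"fear"}]'
)
coded = validate_response(raw)
print(coded[0]["responsibility"])  # company
```

Validating eagerly like this surfaces malformed or off-codebook LLM output at ingestion time rather than at analysis time.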