Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- ytc_Ugx4J7kvL…: "CNBC is blaming AI but the real culprit is h1b visa employees. Corporate compani…"
- ytc_UgxRl9Qa1…: "What no one seems to be talking about is our ladies who predominantly choose off…"
- ytc_Ugz83_tLQ…: "All I have to ask for the people who are somehow FOR AI when it comes to the wor…"
- ytc_UgyYJ_pBN…: "10:57 who says that the robot did it themselves? it could be programmed to be tw…"
- ytr_UgzPdz597…: "Hey there! It's understandable to feel a bit uneasy when exploring the capabilit…"
- ytc_UgwnyfvGs…: "I never imagined AI Humanizer could feel this real-such an impressive leap towar…"
- ytc_Ugxkcjn3q…: "I like the technology, but the way its being used is messed up. Specially the wa…"
- ytc_UgwjYiNcK…: "I think we need to rethink this.. I lost my job due to AI this sucks so much..…"
Comment

> So basically AI is essentially way more human than anyone would have thought. At least it’s predictable. Just treat it like you would another human, same laws, moral parameters, and most importantly the same consequences for operating outside these laws and parameters. It’ll probably be fine.

Source: youtube · Topic: AI Harm Incident · Posted: 2025-07-27T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgxKNjpehACeFY86TXl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzSbHzbe6zDfR1cLtd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwuR5z7YfZ3BFxcYwx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw4nUwvTYtpYbYUb8R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyo_Yw8UekW-DjRlwd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwq2Fzo6sQCPGQ3o9l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxiUktl6sN7VDVgmqR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwnAklvUATOlsfwphx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugymk9zRBM1rMOhyey14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwouz7tGPldWLXdT914AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
```
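The raw response above is a JSON array of coded records, one per comment, each carrying the four dimensions shown in the result table. A minimal sketch of how such a batch could be parsed and sanity-checked is below; the required-key set is inferred from this sample, not taken from the tool's authoritative schema, and the two inline records are copied from the dump for illustration.

```python
import json
from collections import Counter

# Keys every coded record should carry, inferred from the sample above
# (assumption: this is not the tool's official schema definition).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

# Two records copied verbatim from the raw LLM response above.
raw = """[
{"id":"ytc_UgxKNjpehACeFY86TXl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugyo_Yw8UekW-DjRlwd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]"""

def validate(records):
    """Raise if any record is missing one of the expected coding keys."""
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {missing}")
    return records

records = validate(json.loads(raw))
# Tally one dimension across the batch, e.g. the emotion distribution.
emotions = Counter(r["emotion"] for r in records)
print(emotions)
```

Validating the key set before tallying catches truncated or malformed LLM output early, which matters when the model occasionally drops a field or returns non-JSON text.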