Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "An ai artist is just someone who gets paid for ask chatgpt for other lazy people…" (ytc_UgzJQGzgL…)
- "AI is not smart, the definition of smart is not how you describe it. Finding stu…" (ytc_Ugx76uGwS…)
- "Another aspect of big tech and the AI boom that is not discussed. Governments wi…" (ytc_UgwV_WI4T…)
- "It is way too soon for AI to have such a thorough immersion in our workplace, le…" (ytc_UgyS9NujA…)
- "In all honesty none of us will really know where the industry is gonna be by 203…" (rdc_ohmv4am)
- "These are some great arguments against AI that I wouldn't have thought of. The e…" (ytc_Ugy_GJKqR…)
- "im convinced you didn't watch the video before commenting. the anti ai sentimant…" (ytr_UgxtXMp1g…)
- "There actually is a happy ending to this. Just because the AI said how it would …" (ytc_UgzIjBh_k…)
Comment
AI will always, forever, be able to give unsafe or untrue advice. Even when it becomes able to reason, and find “truth”, it will still be based on human studies, which can also be misinterpreted. AI has a hopefully great and interesting future, but taking its outputs at face value without thinking to do further research or apply critical thinking is a failure of our (global “our”) education.

Source: youtube · Topic: AI Harm Incident · Posted: 2025-11-26T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwXj31gZXjyfnR-8up4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz95s8kc4TgnNNi_IB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyjlcVDkQq3j3BRrE54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugx-hZdCuHMWwxGDycV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxNfC7qVMPQcwSB0jN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugysiw5QjG6QDLAznrV4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx2ANk3EIvvzbvkY5B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw0vPJtRa0pcmTh0lF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwb35NSCRp1OZc1CsV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxayw4_NQ-AG2hDevR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
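The raw response is a JSON array of per-comment codes along four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such an output might be parsed and validated before it reaches the result table above — the allowed category values here are inferred from the samples shown on this page, not from the project's actual codebook, so treat `SCHEMA` as an assumption:

```python
import json

# Allowed values per dimension, inferred from the samples above.
# Assumption: the real codebook may contain additional categories.
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference",
                "resignation", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, skipping bad rows."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue  # drop rows without a comment ID
        if any(row.get(dim) not in allowed for dim, allowed in SCHEMA.items()):
            continue  # drop rows with out-of-schema values
        coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Hypothetical one-row response for illustration.
raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
codes = parse_codes(raw)
print(codes["ytc_x"]["emotion"])  # -> indifference
```

Validating each row against a fixed schema, rather than trusting the model output directly, is what makes "Look up by comment ID" safe: a row that fails validation is dropped instead of propagating an unknown category into the coded dataset.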