Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_UgytgPY5Y… : "The ban was so absurd even from what should be the perspective of the AI compani…"
- ytr_UgxiysA2S… : "Thank you for watching our video! If you're interested in engaging with advanced…"
- rdc_gkqlh20 : "As a trucker there still is a fear that we wont have jobs at all in 20yrs or so …"
- ytc_UgysXFzQl… : "dude the AI asses r getting sued no way is disney gonna let that mike wazowski f…"
- ytc_UgxNnxJVM… : "After perusing the comments here, maybe all our traumas deserve a mythic mediocr…"
- ytc_Ugyxzjy6R… : "What has the internet and AI wrought? How will we ever get this horse back into…"
- ytc_Ugz6nnjcK… : "Sorry to say but you are wrong. When a kid draws something randomly a parent kee…"
- ytr_UgyKsg-ec… : "Thank you for your thoughtful comment! We aim to strike a balance in our coverag…"
Comment

> Humans don't intentionally kill off species, so why would AI that's hypothetically even more intelligent than humans go genocidal against humanity? If it's so intelligent, humans would unlikely be a threat, and I imagine humanity will be far more useful to AI alive. I am wise enough to know that I know nothing, and at my best, I am a fool.

Platform: youtube · Topic: AI Governance · Posted: 2025-08-06T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | contractualist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
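Each coded record assigns one value per dimension. A minimal validator for such a record is sketched below; the allowed vocabularies are an assumption, assembled only from the values visible on this page, and may be incomplete.

```python
# Minimal validator for a coded record. The per-dimension vocabularies
# below are an assumption reconstructed from values seen on this page.
ALLOWED = {
    "responsibility": {"government", "developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "unclear"},
    "policy": {"liability", "regulate", "unclear"},
    "emotion": {"fear", "approval", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return the names of dimensions whose value is missing or not in the vocabulary."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The record shown in the Coding Result table above passes validation.
coded = {"responsibility": "ai_itself", "reasoning": "contractualist",
         "policy": "unclear", "emotion": "mixed"}
print(validate(coded))  # []
```

An empty list means every dimension holds a known value; anything else names the dimensions that need review.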
Raw LLM Response
```json
[
  {"id":"ytc_UgwETBlQLgxP-Io0X094AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugzv1nrBsOg95iNKAbJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx5uRRkxRrey2X9D5l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxIZucn4rxC7MMi0Dt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyv1DFh7UKy1g7taJZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzvUxKJINCrOCdS2s94AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx2QvQslZZzpgr9NuV4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxS6dCx4FOXmMZKNA94AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgymGti998p-X61eMDN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzWfv7AsvLTGj3ZQ0V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
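Because the model returns one JSON array per batch, looking up a single comment's coding (as the "Look up by comment ID" feature does) amounts to indexing the parsed array by `id`. A minimal sketch, using two records from the response above:

```python
import json

# Parse a raw LLM response and index it by comment ID so a single
# record can be looked up directly. The two records are taken
# verbatim from the batch shown above.
raw_response = """
[
  {"id":"ytc_Ugx2QvQslZZzpgr9NuV4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxS6dCx4FOXmMZKNA94AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"}
]
"""

records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_Ugx2QvQslZZzpgr9NuV4AaABAg"]
print(rec["reasoning"])  # contractualist
```

In practice the parse step would also need to tolerate malformed model output (truncated arrays, stray prose around the JSON), which is omitted here.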