Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "If replacing low level jobs meant that people could live comfortably without tho…" (ytc_UgwoU39CJ…)
- "Imagine having super smart "subservient AIs", like as if they couldn't just comm…" (ytc_Ugxpdou8J…)
- "high school english teacher here, so, take that for what it's worth, but… i've a…" (ytc_Ugx8nvTTF…)
- "Hey. my fellow human. You may be right about all of this and there's a very good…" (rdc_myo4gd2)
- "... let Dhalailama tach the AI in the first glimps, just to be aware later to be…" (ytc_UgxV9lUaq…)
- "The big corporations want you to believe that it's AI taking your job. No, it's …" (ytc_Ugyd9s7KX…)
- "At the end, it was curious to see how you offered one Dune analogy and got hande…" (ytc_UgwZOITcE…)
- "Why does he think Elon Musk has no moral compass? I am having hard time to see i…" (ytc_UgxP2L5GL…)
Comment

> Just because ai can do something it doesn't mean we want it to, I wouldn't want to watch ai podcasts, visit ai psych, go to ai restaurants, etc

| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Governance |
| Posted | 2025-09-06T23:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw39DMKWzAUsEK0WwV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwnzU2gJMzs2tVjjbd4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwPFP2QPtthHIW9A9l4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyXTGCVpFaANSTrOLR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgylZPlcvYMeJ_uLbtF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxV7ktbr4k5Pee36IZ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgymSbGDsG7pHWw34FJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxx3l1usEYhECc5ZIV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzzX4fKUeBMegcapM54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugz669ndLl1zZ--znJp4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"resignation"}
]
```
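The "look up by comment ID" feature maps each entry in a raw batch response like the one above to its four coding dimensions. A minimal sketch of that lookup, assuming the response is a JSON array of objects with the field names shown; `index_by_id` and `DIMENSIONS` are hypothetical helpers, and `RAW_RESPONSE` reuses two entries from the dump for illustration:

```python
import json

# Illustrative sample: two entries copied from the raw LLM response above.
RAW_RESPONSE = """
[
  {"id":"ytc_Ugw39DMKWzAUsEK0WwV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgymSbGDsG7pHWw34FJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw batch response and index each coding by its comment ID.

    Missing dimensions fall back to "unclear", matching the table's
    convention for fields the model did not resolve.
    """
    index = {}
    for rec in json.loads(raw):
        index[rec["id"]] = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return index

codings = index_by_id(RAW_RESPONSE)
print(codings["ytc_UgymSbGDsG7pHWw34FJ4AaABAg"]["policy"])  # -> ban
```

Indexing once and looking up by ID keeps retrieval O(1) per comment, which matters when a coding run covers thousands of comments.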