Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
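The lookup mechanism itself is not shown on this page. Below is a minimal sketch of how a coded record could be fetched by comment ID, assuming the codings are stored one JSON object per line; the file name `coded_comments.jsonl` and the function are illustrative assumptions, not part of the tool.

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the coded record for a comment ID, or None if absent.

    Assumes one JSON object per line, each with an "id" field, e.g.
    {"id": "ytc_Ugw...", "responsibility": "ai_itself", ...}.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: inspect the exact model output for one coded comment.
record = lookup_comment("ytc_UgwyWjrupENz0if0az54AaABAg")
if record is not None:
    print(json.dumps(record, indent=2))
```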
Random samples:
- "bullying AI is wrong -I say in hopes the robots are nice to me during the upris…" (ytc_Ugwjd9TJY…)
- "He didn't touch on the existential issues including job displacement, social ine…" (ytc_UgxEL3p8T…)
- "ChatGPT is NOT Artificial Intelligence, it's an way to search the Internet . . .…" (ytc_UgwnIl1eD…)
- "Patrick is speaking of overall value to average students. He has a bunch of kids…" (ytr_UgwUGAlRh…)
- "Jobs ani a.i ki echi andariki ani free chayandi like free food free health free …" [roughly: "give the jobs to AI and make everything free for everyone, like free food, free health, free …"] (ytc_Ugw4YcVZn…)
- "I don't think it's a realistic robot. I'm fairly convinced it is a real, actual…" (ytc_Ugy8WQExr…)
- "But how can you prove that it didn't fail the test on purpose to reduce your sus…" (ytc_UgzA1uW5K…)
- "@cormorantblack An old man doesn't need money, he already has enough by having a…" (ytr_UgwMkgH1D…)
Comment
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
youtube · Cross-Cultural · 2025-10-07T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
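The dimensions in the table map onto the fields of each coded record. A minimal sketch of that schema follows; the label sets list only the values visible on this page (the full codebook may define more), and the `CodingResult` class is an assumption, not the project's actual data model.

```python
from dataclasses import dataclass

# Label sets observed in this sample; the real codebook may be larger.
RESPONSIBILITY = {"ai_itself", "developer", "company", "government", "distributed", "unclear"}
REASONING = {"deontological", "consequentialist", "unclear"}
POLICY = {"regulate", "unclear"}
EMOTION = {"fear", "indifference", "approval", "resignation", "outrage", "unclear"}

@dataclass
class CodingResult:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise ValueError if any dimension holds an unknown label."""
        checks = [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]
        for name, value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"{name}={value!r} is not a known label")
```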
Raw LLM Response
[
{"id":"ytc_UgwyWjrupENz0if0az54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwCnQF-NZz0-1Y28S54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyvUHJVGyXXOqUQahZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyOQW-sbI2OhrV3YEl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwXYvAZrk72ouGrmcp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugy3ZokfA4489qyFF-14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxOMRA7nMqjlpi_Hf14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugzsrr8Z6q3rQWf_Bop4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz3pEWvnKw7aX4PvH94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx9B4zxaJvqycmeffR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
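The raw response is a JSON array with one coding object per comment in the batch. Below is a minimal sketch of how such a response could be parsed and checked before the codings are stored; the function name and its error handling are assumptions rather than the tool's actual code.

```python
import json

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a batched coding response like the JSON array above.

    The model is expected to return a JSON list of objects, each with
    "id", "responsibility", "reasoning", "policy", and "emotion".
    Anything else is treated as a malformed response.
    """
    data = json.loads(raw)
    if not isinstance(data, list):
        raise ValueError("expected a JSON array of coded comments")
    required = {"id", "responsibility", "reasoning", "policy", "emotion"}
    codings = []
    for item in data:
        if not isinstance(item, dict):
            raise ValueError(f"expected a JSON object, got {type(item).__name__}")
        missing = required - item.keys()
        if missing:
            raise ValueError(f"coding for {item.get('id', '<no id>')} is missing {sorted(missing)}")
        codings.append(item)
    return codings
```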