Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `rdc_mt7uebt`: And he is right. AI is just the next thing added to the list that they say is go…
- `ytc_UgwkjcfQ3…`: Self-driving technology is just a covert military program. The armed forces need…
- `ytc_UgwEEEmUt…`: 3:19 That part where the higher ups have made these decisions and have failed al…
- `ytc_Ugxu2CECC…`: Remember the movie on Disney where the robot got mad at the owners and came to L…
- `ytc_UgywcgDkc…`: It’s exciting, but scary , but what will happen to humans jobs if AI starts taki…
- `ytc_UgwRwTJYJ…`: I still think this is way anthropomorphizing AI. It's software that runs on comp…
- `ytc_Ugz7zwhyR…`: It seems to cause the similar fear when the last industrial revolution was comin…
- `rdc_m6yrjqa`: This feels like the AI is asking this not the Human. FEED ME SEYMOUR, FEED ME!!!…
Comment

> Actually, there is a correct answer. One must make a choice to do the greatest good. Saving more people is the greater good than just saving one, given the information that we have in the moment. The ethically and morally correct answer is to save the most people. This is the problem with Artificial Intelligence, it is severely limited compared to human beings.

Platform: youtube
Posted: 2025-10-20T01:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz3crH4-Fh6siBBNtV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy0yr6vK3pxLHlee6p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"amusement"},
  {"id":"ytc_Ugx09tMrEm7dg4Cgub14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyZ9os5CXSexI4W6IV4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"amusement"},
  {"id":"ytc_UgzodU2w3QR7sCnyahh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"disapproval"},
  {"id":"ytc_Ugz2H6tEYnabsyeQHbh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxGTwYYpqU3xkhTQt54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxat9gFqMI_7DG7Lqx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzHw9zdeIF7EFgXZep4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx_qIz0rL-AIRoCRrx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
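The raw response above is a JSON array with one record per comment ID, which the tool then maps to rows like the Coding Result table. A minimal sketch of how such a batch might be parsed and sanity-checked — the category sets below are only the values observed in this sample, and the project's actual codebook may define more; `parse_batch` is a hypothetical helper, not part of the tool:

```python
import json

# Dimension values observed in the sample above; the full codebook may differ.
OBSERVED = {
    "responsibility": {"ai_itself", "company", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none"},
    "emotion": {"approval", "disapproval", "amusement", "indifference",
                "fear", "resignation", "outrage"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: dimensions},
    warning on any value outside the observed category sets."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        dims = {k: v for k, v in rec.items() if k != "id"}
        for dim, val in dims.items():
            if dim in OBSERVED and val not in OBSERVED[dim]:
                print(f"warning: {cid}: unexpected {dim}={val!r}")
        coded[cid] = dims
    return coded
```

This keeps the lookup-by-comment-ID workflow cheap: once parsed, `coded["ytc_…"]` returns that comment's coded dimensions directly.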