Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples:

- "Interestingly, an examination of the invited guests at the recent Bilderberg Mee…" (`ytc_UgylYsy72…`)
- "Keeping required functionality and accessibility behind a paywall and channellin…" (`ytc_UgzIdQGvS…`)
- "To be fair, the ai is speaking very specifically, it is denying consciousness du…" (`ytc_UgyZei-uL…`)
- "The only person who would decide its a good idea to train AI models on reddit da…" (`rdc_l4fnbdo`)
- "glad to see charlie isn't supporting ai \"artists.\" It's souless, meaningless It'…" (`ytc_UgyIIUdCl…`)
- "the human problem, is that a human thought that AI was smart or accurate or good…" (`ytc_Ugyqs7hI1…`)
- "We'd start blowin shi** up.... simple. Ai cant physically protect the government…" (`ytc_UgyDBSRcR…`)
- "So he asked the robot to pretend - basically become biased based off of the opin…" (`ytc_UgxqkfWY0…`)
Comment

> My concern with AI is wat if it makes a mistake but invested too much? Say it creates a better way of farming n we're not smart enough to see a problem it missed. It seems like if a super computer tells us the best way to do something n everyone does it just to compete with their neighbor only to find out later there's a massive flaw that could turn out real bad for everyone

youtube · AI Moral Status · 2025-12-18T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxZNmRS22waNYTiEVZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwhnk2eaEA9eoV8shB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwIYvNoKLolOlnXnu54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzNP-P8UI_ZmONvNTZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxOECvF0OT5nnhYmW94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyHdZyP_TPpEZkozVJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzT900WY9_FT5AdxOF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxZbTsHf4p8CFheT_Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzuUL361TWdqkka8614AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzqMx9Ke0Qk8svOZKR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
```
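A batch response in this shape can be checked before its rows are stored as codings. The sketch below is a minimal, hypothetical validator: the per-dimension category sets are inferred only from the values visible on this page, and the real codebook may allow more values.

```python
import json

# Allowed values per dimension, inferred from the codings shown above.
# Assumption: the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "indifference", "mixed", "approval"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject malformed codings."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"missing id in {row!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={row.get(dim)!r}")
    return rows

# Example: a one-row batch in the same shape as the raw response above
# (the id is a placeholder, not a real comment id).
sample = '''[
  {"id": "ytc_example", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "liability",
   "emotion": "fear"}
]'''
rows = validate_batch(sample)
```

Validating eagerly like this means a single miscoded row fails the whole batch, which makes it easy to spot when the model drifts outside the codebook.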