Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "At the rate we are going with AI, very soon we will have humanoid robots in numb…" (ytc_UgyOiy2PW…)
- "Kinda ducked up that an AI thinking you may do something bad is enough for the g…" (ytc_UgwtgylK0…)
- "Ooh ai you are soo scary ooh you’re inside that computer and I’m out here with t…" (ytc_UgzdvXpfK…)
- "It's simple, you use AI for where it is appropriate and adrist where it is appro…" (ytc_Ugyp1w_v8…)
- "@mistermiyagi6073 I was beginning to wonder this as soon as he calls it bias to …" (ytr_UgypG2BQj…)
- "I been doing the same thing, especially with the speech 😂 but then it dawns on m…" (ytc_UgwLXW6su…)
- "i mean considering world population in advanced countries is severely declining …" (ytc_UgxhSsDt0…)
- "It seems to me an intelligent AI can only be psychopathic. Itself, it doesn't h…" (ytc_UgxpOnzIL…)
Comment
I’ve seen adverts by trusted individuals that they’ve confirmed they never made them. Ai is the most dangerous technology that could cause catastrophe to the human race. More so than nuclear weapons. One day an Ai scenario will be produced that will cause a reaction; possibly nuclear.
Source: youtube · 2026-03-08T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugztp7snhNMPl-2icap4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwrCVhP7XCc0bHanCB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyUUOcnq8Yieyaih5l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgygBwfM9BZpqTI0Bat4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy4zJhtzAQcBiFe9Ex4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzR9ZrLvUdezqe_QVF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwruewhSU0yaUPNNFl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgwyKl88Y48I9cXNxBB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzQupPLlsKpRu1YJ5d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzXXfghODbUI13q2kN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
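The raw response above is a JSON array, one record per coded comment, with four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch might be parsed and validated before it lands in a results table like the one above — note that the allowed label sets below are inferred only from the values visible in this sample and are probably not exhaustive:

```python
import json

# Label sets observed in this sample; the real codebook may define more.
ALLOWED = {
    "responsibility": {"user", "ai_itself", "distributed", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear", "regulate", "liability", "ban"},
    "emotion": {"approval", "indifference", "outrage", "fear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID.

    Raises ValueError if a record is missing its ID or uses a label
    outside the observed sets.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        comment_id = rec.get("id")
        if not comment_id:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{comment_id}: unexpected {dim} value {value!r}")
        coded[comment_id] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

`parse_batch` is a hypothetical helper, not the pipeline's actual ingestion code; it simply makes the structure of the raw response explicit: one dict per comment ID, holding exactly the four coded dimensions.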