Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- @Kriinification AI is showing signs of being unprofitable. Remember NFTs and the… (`ytr_Ugy0Tm0kb…`)
- I look for what I call "logic errors". For example, an AI-generated image of a m… (`ytc_UgzncD8aq…`)
- Replacing humans with AI is the most stupid thing humans can do to humanity. Ima… (`ytc_UgwAvly1i…`)
- i also use ai story and video generators for the funny, not to spread misinforma… (`ytr_UgyGSZ-GZ…`)
- best vid on ai thats come out on youtube since all this ai stuff has happened. w… (`ytc_UgyRjTBDo…`)
- Lol the New York Times still thinks it is relevent. All that will exist of it is… (`ytc_UgxlCvVAW…`)
- She is exceptionally insightful. Very impressed with her interpretation and expl… (`ytc_UgxKsd6L3…`)
- Eh, using AI and editing it and claiming it as the dude's own creation is a bit … (`ytc_Ugxx_JASi…`)
Comment
I am worried about AI getting out of control and turning against us.
I feel the only reason people don't worry about this currently is because there's this notion in our head that if AI did take over it would happen because the AI became "sentient". And I feel we've dismissed this because we now know AI isn't (at least currently) that intelligent. But in my head a machine making decisions based on an algorithm would be much more dangerous.
I saw a demonstration where an AI was given an instruction to not lose a game. The AI managed to figure out by making a certain move the game would crash, avoiding a loss every time. This is the kind of decision making I'm afraid of from machines.
And in Sci-Fi movies most AI have some kind of code of honor to follow to not hurt human beings. Correct me if I'm wrong, but there's no such thing in actual AI today.
Source: youtube · Video: AI Moral Status · Posted: 2025-10-31T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxulKGJi86wcT0kDzF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxx2bURL3blvxVxZQZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwR7p97wVP-tPwq17p4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzpdD3iI1J9uBK_b0B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwk184PxRN3wdcOYzt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugxir1tqHkzbkDm6jLd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxNVOH9G5701G7oaQt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxd0s0yoZMjnfOv6QN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzUuGtUdClySVisWrF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzu3eh73nscrGi7bxN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
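For anyone reproducing this coding pipeline, a minimal validation sketch for a raw response like the one above. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself; the allowed value sets are assumptions inferred only from the values visible on this page, and the real codebook may contain more categories:

```python
import json

# Allowed codes per dimension, inferred from the responses shown above.
# ASSUMPTION: the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company", "government", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "indifference", "fear", "outrage", "mixed", "resignation"},
}


def validate_response(raw: str) -> list:
    """Parse a raw LLM response and reject entries with unknown codes."""
    entries = json.loads(raw)
    for entry in entries:
        # Comment IDs on this page start with ytc_ (comment) or ytr_ (reply).
        if not str(entry.get("id", "")).startswith(("ytc_", "ytr_")):
            raise ValueError(f"bad comment id: {entry.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                raise ValueError(
                    f"{entry['id']}: {dim}={entry.get(dim)!r} not in codebook"
                )
    return entries


raw = (
    '[{"id":"ytc_UgxulKGJi86wcT0kDzF4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"approval"}]'
)
coded = validate_response(raw)
print(len(coded))  # 1
```

Validating every batch this way catches the common failure mode where the model invents a label outside the codebook, before the bad code silently lands in the coded dataset.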