Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- *i have all my money on oligarchs using AI to genocide the working class before w…* — `ytc_UgwrSh2Rp…`
- *Yeah Canada has a huge leg up on this stuff thanks to SARS back in 2003. The rig…* — `rdc_fjzpp50`
- *This ChatGPT guy seems like someone I want to hang around with. I don't mind if …* — `rdc_llbxxv4`
- *And nothing of value was lost! Glad to see Ai garbage getting knocked down a peg…* — `ytc_UgzuaR-YG…`
- *My lecturers are puzzled by why am I falling being my peers’ speed of submission…* — `ytc_Ugyd5Esko…`
- *Chatgpt restored a microfiche image which was an article from 2003 to almost per…* — `ytc_Ugy4n62xq…`
- *I was listening to what this guy had to say the other day, this Altman guy, and …* — `ytc_UgyjoFajj…`
- *humans didn’t stop playing chess when computers past us by far in skill. That wa…* — `ytc_Ugxuail3K…`
Comment
This does not make sense. I think much confusion is being caused that can be cleared up if we chose more accurate language. We must stop saying things like "see if the AI will try to fool the user". Should be "see if the programmers designed the system to simulate actions that are incorrect '. Then instead of a person thinking some alien being us taking over, they can say "why would a programmer intentionally design a system to produce incorrect results"? Then you can grab whoever suggested we buy this system by the neck and tell him to get rid of it until they fix the bugs. Then we can all stop pretending we've lost our minds, and maybe go get some lunch.
Source: youtube · AI Governance · 2025-12-25T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyRxRfC6xUrMa9NxR94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyf2QDf6rBaEzUF2j94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxg00L8q3jOGQxIDNB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzMkwZBwE13Nqtv65x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyYReWrncYbsPu14ip4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzuBzD9f_LfexZBuRh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzLKJt0wHox6zqp-3N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzx9XD9aQDZ3MPvgEd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz1UQcJgttbjMGsei14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzReM8qceiOUQfhFYR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
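A raw response like the one above can be checked before it is accepted into the coding table. The sketch below is a minimal validator, assuming the allowed category values are exactly those that appear in the table and response shown here (the full codebook may define more); the `ALLOWED` mapping and `validate_codings` helper are illustrative names, not part of any real tool.

```python
import json

# Allowed values per coding dimension — assumed from the examples above;
# the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "ban", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject any record whose value
    for a dimension is not in the allowed set."""
    records = json.loads(raw)
    for rec in records:
        for dim, values in ALLOWED.items():
            if rec.get(dim) not in values:
                raise ValueError(
                    f"{rec.get('id')}: invalid {dim!r} value {rec.get(dim)!r}"
                )
    return records

raw = (
    '[{"id":"ytc_example","responsibility":"developer",'
    '"reasoning":"deontological","policy":"none","emotion":"indifference"}]'
)
print(len(validate_codings(raw)))  # → 1
```

Rejecting malformed records at parse time keeps a single mis-coded response from silently entering the dataset.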