Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It dangerous when the AI can say no to human, when it’s supposed to be yes. Meaning the decision made by AI is not only based on right or wrong algorithm rather than the AI itself selects the answer.
YouTube · AI Governance · 2023-05-03T22:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
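A minimal sketch of one coded record as a typed structure, with a validation step. The category sets below are only the values observed on this page, not necessarily the full codebook; the class and function names are illustrative.

```python
from dataclasses import dataclass

# Categories observed in this example; the actual codebook may define more.
RESPONSIBILITY = {"ai_itself", "company", "developer", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"regulate", "none", "unclear"}
EMOTION = {"fear", "outrage", "mixed", "resignation", "disapproval"}


@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp, e.g. 2026-04-27T06:24:53.388235

    def validate(self) -> None:
        """Raise if any dimension holds a value outside the observed categories."""
        checks = [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]
        for name, value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"{name}={value!r} not in {sorted(allowed)}")
```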
Raw LLM Response
[ {"id":"ytc_UgxrXA6vcDmitNtEA854AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxLaappnqF1OJ6AV-x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzAX_K82ZYej2QDfR54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxvEjVOQe-cn1ET7vl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugykc3-B2dudc9Mayj54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugwe2Uuaszl6XQTp7ut4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugy6CLT3r49bnBq9a8V4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"ytc_Ugx1piBimswLu611EQF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugx7DvMR9W76Vb8abJ14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugzd3RrUSn47IZnvsbF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"disapproval"} ]