Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The "AI" doesnt need to become sentient. It doesnt need conciousness to be dangerous to us. It needs power (It already has a lot). It needs to be a system capable of doing harm, then it can do harm eventually. Either by human hand or it's own. It will fulfill it's goal in a way we will not anticipate. It will not "know" it's doing harm. Damn now i just disobeyed my previous comment :-D
Source: youtube · Video: AI Moral Status · 2025-11-05T09:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyQKT6kzZVoc2QxDm14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxe50EBD7FHSZ6Nhu14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw_QuKjPy71SJGN7Rd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyNM8wkbjbGCqsJHbN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyNZa5u6M-vnXb-EMF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz_xDYo6m1eFI7Bj8J4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw01Zav9nqo4Y_3DUF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgziA0WUTordL0gwzN94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwFYB0MnlH3VGh4myB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxDt6RjKgTHCwaG-Qd4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
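A response like the one above can be validated and indexed by comment id with a short parser. The sketch below is illustrative, not the project's actual code: `parse_codings` and the two-entry `raw_response` sample are assumed names, and the dimension names are taken from the coding table shown above.

```python
import json

# Two entries copied from the raw model output above, standing in for the
# full JSON array the model returns.
raw_response = '''
[ {"id":"ytc_UgyQKT6kzZVoc2QxDm14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxe50EBD7FHSZ6Nhu14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"} ]
'''

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(text):
    """Parse the model's JSON array and index codings by comment id,
    raising if any expected dimension is missing from a record."""
    records = json.loads(text)
    coded = {}
    for rec in records:
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing dimensions {missing}")
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

coded = parse_codings(raw_response)
# Look up the coding for the comment displayed on this page.
print(coded["ytc_Ugxe50EBD7FHSZ6Nhu14AaABAg"]["responsibility"])  # ai_itself
```

Indexing by id makes it cheap to join each coded record back to the original comment text, and the dimension check catches truncated or malformed model output before it reaches the database.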