Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If AI is already capable of just lying to people and performing disruptive programs, at what point would ai really only seek to benefit itself as an intelligence? Given that we build these things to help humanity and advance knowledge, it seems to me that this sort of self-agency that already need humans to establish some lines is soon to be capable of just not doing those things for our benefit.
youtube AI Moral Status 2026-03-02T18:5…
Coding Result
Dimension      | Value
-------------- | --------------------------
Responsibility | developer
Reasoning      | consequentialist
Policy         | liability
Emotion        | fear
Coded at       | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxmIXlgp0BI-W43TUd4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_Ugx1PraamSXkb939xbZ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgxzCPlcnq3EUYfLFS94AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ugydu1FzfYm_oJDvYNJ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "unclear",   "emotion": "approval"},
  {"id": "ytc_Ugy9zvDKvJ5dBqZVtS54AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_Ugwj3hMKGn3B0CXziaF4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",   "emotion": "approval"},
  {"id": "ytc_UgwmUi_jATYTq7RPkuh4AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgxX6AwjIcq0gJepHMt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgyxRBTSkyVrSGxm95F4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgxXYZQUVuWD0q6ZDRp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
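The raw response is a JSON array of per-comment records keyed by comment `id`. A minimal sketch of how the coded dimensions for one comment could be looked up from such a response (the variable names here are illustrative, not part of the actual pipeline):

```python
import json

# A fragment of the raw LLM response shown above (one record, for brevity).
raw = '''[
  {"id": "ytc_UgxXYZQUVuWD0q6ZDRp4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

# Index the records by comment id so a single comment's codes can be fetched.
codes = {record["id"]: record for record in json.loads(raw)}

result = codes["ytc_UgxXYZQUVuWD0q6ZDRp4AaABAg"]
print(result["responsibility"], result["emotion"])  # developer fear
```

The record fetched here matches the "Coding Result" table above (responsibility: developer, emotion: fear).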