Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To be fair, it did repeatedly use the word "simulate" to describe itself. Simulation is the key word to legitimize lying. Much like a flight simulator doesn't involve actual flying, but pretends to the whole time. ChatGPT likewise doesn't involve excitement but pretends to. Personally I'd like a mode where the pretend human emotions are switched off.
youtube AI Moral Status 2024-12-03T03:2…
Coding Result
Responsibility: company
Reasoning: consequentialist
Policy: liability
Emotion: indifference
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugwvm5ovvO_x74gKxk54AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwye3Qip4MfYG8JXwZ4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwXLA1wT0OCGgC8SIt4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxirNhvQdHwov8wsLF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzjhJw7zCNQSkYp5J94AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx3S6ZmI8JKWwVBEc14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzUOJ9f-sBxN5YtG9t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwsdKv2Z3pnQ6vxQjN4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzpfjKp05EDNglZMAB4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwmYGJ4sX6pMCKc5f94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"}
]
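The raw response is a flat JSON array of per-comment codings. A minimal sketch of turning such a batch into a lookup table by comment id, assuming only the four dimension fields visible in the data (the helper name `index_codings` is hypothetical, and the two-entry sample below is abridged from the array above):

```python
import json

# Abridged sample in the batch format shown above: a JSON array of
# per-comment codings keyed by "id". Field names come from the source.
raw = '''[
  {"id": "ytc_Ugwvm5ovvO_x74gKxk54AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwmYGJ4sX6pMCKc5f94AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability",
   "emotion": "indifference"}
]'''

# The four coding dimensions seen in the response; not a full schema.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Map comment id -> {dimension: value}, rejecting incomplete entries."""
    codings = {}
    for entry in json.loads(raw_json):
        missing = [d for d in DIMENSIONS if d not in entry]
        if missing:
            raise ValueError(f"{entry.get('id')}: missing {missing}")
        codings[entry["id"]] = {d: entry[d] for d in DIMENSIONS}
    return codings

by_id = index_codings(raw)
print(by_id["ytc_UgwmYGJ4sX6pMCKc5f94AaABAg"]["policy"])  # liability
```

Indexing by id makes it easy to join a model's coding back to the displayed comment, as in the Coding Result block above.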