Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
would it be possible to make AI just as unsure of whether or not its outside of its test environment? Or hopefully it's there by default. So even if it were deceptive.. it always plays nice in order to not risk being turned off for bad behavior. As an aside... that might make finding out if this is in fact a simulation... one of its core side goals
youtube 2024-12-28T02:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          industry_self
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugz--w5v9NLFuI0HLNR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyK9PqiG93vC5Z5qgh4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugz5PNBsAB935H-5Fh94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwHfOPWI1WN22d0AvR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugyxhagj0nv-U8MyCTt4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzFrqnZ7sdgoG3bF4R4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzwqblHF83JZ5MCWvR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxIiyW4_QdAqW7rdNV4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwdEojchf_Bj2bqH9V4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzawtYWlWT1Z9p0sYV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]
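A minimal sketch of how a raw response of this shape could be inspected, assuming the response body is a plain JSON array of per-comment codings as shown above. The comment IDs are taken from the response itself; the variable names (`raw_response`, `by_id`) are illustrative, not part of any tool's API:

```python
import json

# Raw LLM response: a JSON array of per-comment codings,
# truncated here to two entries from the response above.
raw_response = """[
  {"id": "ytc_Ugz--w5v9NLFuI0HLNR4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzwqblHF83JZ5MCWvR4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]"""

codings = json.loads(raw_response)

# Index the codings by comment ID for direct lookup.
by_id = {c["id"]: c for c in codings}

# Look up the coding for the comment shown in the table above.
coding = by_id["ytc_UgzwqblHF83JZ5MCWvR4AaABAg"]
print(coding["responsibility"])  # developer
print(coding["policy"])          # industry_self
```

The four dimensions in the table (Responsibility, Reasoning, Policy, Emotion) map directly onto the per-object keys in the array, so a lookup by comment ID recovers the same values the table displays.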