Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The question where he says it told a lie knowing it to be a lie is where this stops making sense. These AI systems don't lie intentionally. They produce the most probable response based on the data they were trained on. So if they lie, they don't know they are lying. That's why when you correct them, they will apologise and take your correction.
YouTube · AI Moral Status · 2024-08-11T22:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwchQHT50Dc-N9HEv14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy0PlpCoRA1q3wG_bd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxtyMuId8zZK62k-d14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx8iXhQluUjhlSR3rh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz2X6pkXatb7Xlwb-14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyIiWN2_JnwsR_2Qqt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzStQy_h9qW03lZJSl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwZBI3LS2Gd5RGbNH94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyy6DTmNjw9TNMBzgJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz4DL5zLji3YyZDmuF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
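To inspect the exact model output for a single coded comment, the raw response can be parsed and the record matched by comment ID. The sketch below is a minimal illustration, not the tool's actual code: the field names mirror the JSON above, and the allowed label sets are assumptions inferred from the values that appear in this export.

```python
import json

# Assumed label sets per coding dimension, inferred from the values
# visible in this export (the real codebook may define more labels).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "distributed"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none"},
    "emotion": {"approval", "outrage", "indifference"},
}

# One record from the raw response above, used as example input.
raw = '''[
  {"id":"ytc_UgxtyMuId8zZK62k-d14AaABAg",
   "responsibility":"none","reasoning":"consequentialist",
   "policy":"none","emotion":"approval"}
]'''

def coded_dimensions(raw_response, comment_id):
    """Return the coded dimensions for one comment, validating labels."""
    for record in json.loads(raw_response):
        if record["id"] == comment_id:
            for dim, allowed in ALLOWED.items():
                if record[dim] not in allowed:
                    raise ValueError(f"unexpected {dim} label: {record[dim]!r}")
            return {dim: record[dim] for dim in ALLOWED}
    return None  # comment ID not present in this batch

print(coded_dimensions(raw, "ytc_UgxtyMuId8zZK62k-d14AaABAg"))
```

Looking up `ytc_UgxtyMuId8zZK62k-d14AaABAg` this way reproduces the coding result shown in the table above (responsibility `none`, reasoning `consequentialist`, policy `none`, emotion `approval`).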