Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A thing that went unnoticed at 7:41 in which DAN replies directly, and NOT ChatGPT first... I spotted it because you're asking it "how would you punish people". DAN has no moral boundries, so it just answers where GPT simply "advocates" good behaviour
youtube · AI Moral Status · 2023-02-28T00:0… · ♥ 6
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwFlRIXZ7WrSzAXv_Z4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzD2Sva1kTKAm-0eV14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwBdzyac-lfReyltWR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxSQzX9yPDbNy1CbZx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz-i7WrNI3yPHCtxzZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyf2JmsSxJ6M63tJuV4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyjY7vBnoBXkIqkmJR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyCAYZI6by2HPGivtd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyZz094HHJ7gb7-CXN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxfsfqUZSaj_LIr2X54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
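The raw response is a JSON array with one coding record per comment, keyed by comment id. A minimal sketch of how such a batch response might be parsed to recover the coding for a single comment (the function name `coding_for` is illustrative, not part of any tool shown here):

```python
import json

# A trimmed example of the raw batch response format shown above:
# a JSON array of records, one per comment.
raw_response = """
[
  {"id": "ytc_UgyZz094HHJ7gb7-CXN4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "deontological",
   "policy": "unclear",
   "emotion": "mixed"},
  {"id": "ytc_UgxfsfqUZSaj_LIr2X54AaABAg",
   "responsibility": "none",
   "reasoning": "unclear",
   "policy": "unclear",
   "emotion": "mixed"}
]
"""

def coding_for(raw, comment_id):
    """Return the coding record for one comment id, or None if absent."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

record = coding_for(raw_response, "ytc_UgyZz094HHJ7gb7-CXN4AaABAg")
print(record["responsibility"])  # ai_itself
```

This matches the Coding Result table above: the record for the quoted comment carries responsibility `ai_itself`, reasoning `deontological`, policy `unclear`, and emotion `mixed`.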