Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
People keep using the term "become conscious", I don't think that's what they really mean because 1. AI can't "achieve" consciousness (technically impossible) 2. The risk isn't in it being concious, it's in it computationally deciding to destroy us, with no emotion whatsoever
youtube AI Moral Status 2023-07-01T14:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugz-s_dWMFH9teJcZuN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwsdO6eJGvObiHtOFR4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugwm7cMZQkUwrud5Uwp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwnBFtBr60YJcLO6K94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxsgFmJzeEqAJuV16d4AaABAg", "responsibility": "none", "reasoning": "none", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy4bfTO68zbehr-Ve94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyM07Um42heHbvZg2Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxxkIE44K0J4WRSLT14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwXnHli55HkAQTTh314AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyWoQ_E9LP2oE4a60R4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
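Because the raw response is a JSON array with one object per comment id, the coding result for any one comment can be recovered by parsing the array and indexing on `id`. A minimal sketch, using an excerpt of the response above (the `by_id` index is illustrative, not part of the tool):

```python
import json

# Excerpt of the raw batch response shown above; each object codes one comment.
raw = '''[
  {"id": "ytc_UgxxkIE44K0J4WRSLT14AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwsdO6eJGvObiHtOFR4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]'''

records = json.loads(raw)

# Index by comment id so the coded dimensions for any comment can be looked up.
by_id = {r["id"]: r for r in records}

coded = by_id["ytc_UgxxkIE44K0J4WRSLT14AaABAg"]
print(coded["emotion"])  # fear
```

This is also a cheap sanity check that the model returned well-formed JSON and one record per comment before the values are written into the coding table.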