Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They also hallucinate as a “test taking strategy” which was reinforced through training. There is no extra penalty for guessing incorrectly vs. leaving it blank. So, logically you’d say “let’s just reward it for saying I don’t know then” but that’s dangerous because the drive to give an answer helps AI see connections we wouldn’t normally see.
youtube AI Moral Status 2025-11-03T14:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxY0I70Rka2xXceaP54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyPAxwLJP-9G-tb_KJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxNy79pB1c3aKBpOAJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxccUaNrjegxSJu29N4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz-ltEH3nQq-ZaUE-R4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxd7Br7dyRJyWRdUst4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx69kz3yfBMpjscTrN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyVQ0oJ16qEHJrE8_x4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw4EmDDbTyb4uXLpcF4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzpaeXUqr1HKp5PVfx4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]
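The coding result shown above is obtained by matching the comment's id against the entries in the raw LLM response. A minimal sketch of that lookup in Python, assuming the raw response is a JSON array of per-comment coding objects with the keys shown above; `coding_for` is a hypothetical helper, not part of the actual tool:

```python
import json

# Raw LLM response in the format logged above (truncated to two
# entries for brevity; ids and values copied from the log).
raw = '''[
  {"id": "ytc_UgxY0I70Rka2xXceaP54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugxd7Br7dyRJyWRdUst4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]'''

def coding_for(raw_response: str, comment_id: str):
    """Return the coding dict for one comment id, or None if it is absent."""
    for row in json.loads(raw_response):
        if row.get("id") == comment_id:
            return row
    return None

# The comment coded above (responsibility=developer, emotion=mixed).
result = coding_for(raw, "ytc_Ugxd7Br7dyRJyWRdUst4AaABAg")
print(result["responsibility"], result["emotion"])
```

In a real pipeline the same lookup would also need to handle malformed model output (e.g. a `json.JSONDecodeError` when the response is not valid JSON), since the id-to-coding join is only as reliable as the parse.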