Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If we want them to learn compassion ethics and mortality we have to find those things and agree on them as a planet first otherwise the robots are going to grow up and realize that we suck and it probably just take over once AI becomes more intelligent and is completely empirical in their decision making and their observation of reality we don't stand a chance
youtube AI Moral Status 2021-07-03T17:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       contractualist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugz2t_028H-OBLfC0yF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx5brBmytfG9XoOOIB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz0A3H0jgYrCk6er954AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy_TKbEIWcGqTLi9hd4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzedpGUGR_02wdJ5kt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyNmorOY5NQqNTH1MB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyFxHpVizgZGG44jPN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugya7PyGLb-f9o07qVp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugzu8GWfBTxTfbggcx94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxYHQDIvEb7JoijQBR4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"}
]
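The raw response above is a JSON array mapping each comment id to one value per coding dimension. A minimal sketch of parsing such an output and checking every value against the codebook (the allowed-value sets below are assembled only from labels visible in this section, not the full codebook, so they are an assumption):

```python
import json

# One entry from the raw LLM response above (same four dimensions per id).
raw = '''[
  {"id": "ytc_UgxYHQDIvEb7JoijQBR4AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"}
]'''

# Allowed values per dimension -- assumed from the labels that appear in
# this section; the project's actual codebook may define more categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "mixed", "indifference"},
}

def validate(codings):
    """Return (id, dimension, value) triples that fall outside the codebook."""
    errors = []
    for row in codings:
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                errors.append((row.get("id"), dim, value))
    return errors

codings = json.loads(raw)
print(validate(codings))  # an empty list means every coding is in-vocabulary
```

A check like this catches the common failure mode where the model invents an out-of-vocabulary label, so malformed codings can be flagged for re-prompting instead of silently entering the results table.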