Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The scariest thing is AI doesn’t have to be smarter than us. It just has to be smart enough to reproduce itself(or continue itself) better than other forms of “life” and it has to be able to kill you efficiently which is already the case
youtube 2019-02-07T17:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugwndn9VLd4GLn39mX94AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugyq_QPXK0mxLRcluaZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzigAqffohr-3LSePl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwXbhZiF8uM75nG22N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgximfIqgA6q3jeTdUp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgznGGmHNMst-SrRPMN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgxeN_VoBrmug--gKYh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx4iRxVvmnzvt4BSf14AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy3_9sFuu1M4d6HEap4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxNaJM0rFzDoqpBJ6h4AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"}
]
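The raw response is a JSON array of per-comment codings keyed by comment id. A minimal sketch of how such a response can be parsed and matched back to a comment (using an excerpt of the array above; the variable names are illustrative, not part of any pipeline shown here):

```python
import json

# Excerpt of the raw LLM response above (two of the ten entries).
raw_response = '''[
  {"id": "ytc_UgxeN_VoBrmug--gKYh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx4iRxVvmnzvt4BSf14AaABAg", "responsibility": "company",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"}
]'''

# Index the codings by comment id so each comment's result can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

# The id of the coded comment shown above.
coding = codings["ytc_UgxeN_VoBrmug--gKYh4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself fear
```

The lookup for this id reproduces the four dimension values in the Coding Result table above (responsibility, reasoning, policy, emotion).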