Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Trying to predict past a singularity is almost impossible, you know, by definition. That is why AI is really dangerous. We are heading into a big chasm, and we don't know what is going to come next. And we can't control it.
youtube 2024-07-05T21:4… ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugzhq9UZEVQ4xv2Biuh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwRRi9IDg9oUFc2xMJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxKAPz72r-00AssY994AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzaoachulessD5dCAR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwbSZIe7-1xzeDAITJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugwm7v9UmsmjIpfiYdd4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzawXdj5oVDVXm30cF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw6uAKJoVFQpltJJsd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxrDCSHDGVPbJhpJm54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugw_cWp5PiS2aUoll-B4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"}
]
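The raw response is a JSON array with one object per comment, keyed by comment id, with one field per coding dimension (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and a single comment's codes looked up — the excerpt embeds two entries from the array above; the variable names are illustrative, not part of any real pipeline:

```python
import json

# Excerpt of the raw LLM response shown above: a JSON array of
# per-comment codes, field names matching the coding dimensions.
raw_response = '''
[
  {"id": "ytc_UgwbSZIe7-1xzeDAITJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwRRi9IDg9oUFc2xMJ4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
'''

# Index the array by comment id for direct lookup.
codes = {entry["id"]: entry for entry in json.loads(raw_response)}

# Retrieve the codes assigned to the comment displayed above.
record = codes["ytc_UgwbSZIe7-1xzeDAITJ4AaABAg"]
print(record["emotion"])  # fear
print(record["policy"])   # liability
```

Indexing by id makes it easy to cross-check the rendered Coding Result table against the exact model output for any single comment.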