Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@iverbrnstad791 How would an AI have a goal? A person has to give it goals, right? Not that a bad person couldn’t or wouldn’t give it bad goals. Oh. Either way, it’s not reassuring is it.
youtube AI Governance 2023-05-10T19:3…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgzDGYqRit1kguC4FJx4AaABAg.9pXyhH7NySZ9pZHRkgPB_a", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_UgzDGYqRit1kguC4FJx4AaABAg.9pXyhH7NySZ9p_Y84DQ4KG", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytr_UgzDGYqRit1kguC4FJx4AaABAg.9pXyhH7NySZ9pqqZDaTg9E", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugz8Eixnm5ZWKmZ8Ird4AaABAg.9pXfUmKGINw9pYIE7T9rDc", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_Ugx6yjVfgPbrzLoiPyd4AaABAg.9pXc3VRnBQj9prr1M-ql6B", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgwHzLyjgs7JRvhPCZZ4AaABAg.9pX_qm9I3HQ9pYNLZfpTvW", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugz_cYOGNypcX3WonPN4AaABAg.9pXYWfiHn_l9pYzQrl6Jy_", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytr_UgzmDQQc44Lj58f4mCN4AaABAg.9pXRwQCL_PU9pXYDjWWhIE", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgzmDQQc44Lj58f4mCN4AaABAg.9pXRwQCL_PU9pXjW4I0mYr", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgzmDQQc44Lj58f4mCN4AaABAg.9pXRwQCL_PU9pYGjQBIuJb", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
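A batch response of this shape can be parsed and sanity-checked before the codes are stored. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON above; the allowed value sets and the `parse_codes` helper are assumptions inferred only from the values visible in this one response, not from a full codebook.

```python
import json

# Allowed values per dimension. These sets are illustrative guesses
# inferred from the values visible in this response, not a full codebook.
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "liability", "regulate"},
    "emotion": {"approval", "outrage", "indifference", "fear", "resignation"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response into coding records, rejecting any
    record whose dimension value falls outside the allowed set."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# Minimal usage example with a shortened, hypothetical id:
raw = ('[{"id": "ytr_example", "responsibility": "user", '
       '"reasoning": "deontological", "policy": "liability", '
       '"emotion": "fear"}]')
codes = parse_codes(raw)
print(codes[0]["emotion"])  # fear
```

Validating against a fixed vocabulary this way catches the common failure mode where the model invents a label outside the coding scheme, so bad records fail loudly instead of polluting the results table.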