Raw LLM Responses

Inspect the exact model output behind the coding assigned to any comment.

Comment
I admire Neil but this time I have to disagree with him. All scientific advancement thus far have not been able to think for themselves nor improve themselves. This will not be the case with AGI / Super artificial intelligence. If we get to a point where AI outsmarts us, there is no way we can control it, nor predict its actions.
youtube AI Moral Status 2025-07-24T08:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw1iJ-uHNHb9vZNu8l4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgxzgJSwwPhmTSz-Kp94AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_UgxPczu4Gyk6PwcZ09h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgzScH6TA1v0JFA4PZd4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxdybQwrp4KDz-DjI14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_Ugy5G3LEeZ4EA4485CB4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_Ugw28XAl6RAKen0zlZF4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_UgxSnszAiA33poP40Xd4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "approval"},
  {"id": "ytc_UgwWdy2bh70NWpQsJuV4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_Ugyahe1o_DjSyKiCBOp4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "resignation"}
]
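The raw response is a JSON array of per-comment codings, so looking up the coding shown above for a given comment id is a matter of parsing the array and indexing by `id`. A minimal sketch (the raw string below is trimmed to the one entry inspected; the full response contains all ten):

```python
import json

# Raw model output, verbatim from the log above, truncated to one entry.
raw = '''[
  {"id": "ytc_UgxPczu4Gyk6PwcZ09h4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "fear"}
]'''

# Index the coded rows by comment id for direct lookup.
coded = {row["id"]: row for row in json.loads(raw)}

entry = coded["ytc_UgxPczu4Gyk6PwcZ09h4AaABAg"]
print(entry["responsibility"], entry["emotion"])  # ai_itself fear
```

The values retrieved here match the Coding Result table for this comment (responsibility `ai_itself`, emotion `fear`), which is the consistency check this view is meant to support.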