Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's impossible to Make artificial super intelligence safe. The best we can hope for would be that the goals that the AI systems had were relatively inlined with our survival. Meaning that the AI would say as a whole, yeah yeah give me a second I'll give you that but then you need me to leave me alone for about 2 or 3 hours or I'll have to spank you and put you in time out.
youtube · Cross-Cultural · 2025-09-30T06:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgySrJ3QSAAMBq_ascJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgznQoTHXago2h78uXZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxHQJDg-vHHbFC-b7Z4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"sadness"},
  {"id":"ytc_UgzjOXpZ-c8aoWXrXB94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxmPNcv_v76z1UdfWJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx_KLEZuDs0_aLckJt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw5iZ749eizEAkihYd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxOEa065W_jG-rV8ER4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxcWPyN4bK_ymKSsq14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyPcAFytJgTopDFKEl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"mixed"}
]
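A raw batch response like the one above can be turned back into per-comment codes with a small parser. The sketch below is a minimal illustration, not the project's actual pipeline; the allowed value sets are assumed from the codes observed in this single batch and may be incomplete.

```python
import json

# Allowed codes per dimension, inferred from this batch (assumed closed sets
# for illustration; the real codebook may contain more values).
DIMENSIONS = {
    "responsibility": {"unclear", "company", "user", "developer", "none",
                       "ai_itself", "distributed"},
    "reasoning": {"unclear", "deontological", "virtue", "consequentialist", "mixed"},
    "policy": {"unclear", "ban", "regulate", "liability", "none"},
    "emotion": {"mixed", "outrage", "sadness", "fear", "indifference"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes}, rejecting
    any value outside the expected code sets."""
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec[dim] for dim in DIMENSIONS}
        for dim, value in codes.items():
            if value not in DIMENSIONS[dim]:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
        coded[rec["id"]] = codes
    return coded
```

For example, `parse_batch(raw)["ytc_UgxOEa065W_jG-rV8ER4AaABAg"]["policy"]` would return `"liability"` for the batch shown above, matching the Coding Result table.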