Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Wouldn't it be sufficient to use AI to figure out how to exit the simulation BEFORE we reach AGI?
youtube · AI Governance · 2025-10-01T12:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyioraT0Gum_cSgpPJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy5uvBHGd20PdH6oVR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzC8rnovFtW3jjZnmd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyMgB_AKhK5Yeq6tB14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzi1ppnwe6X4yEjptd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzPTzws_pX5vlYdP1x4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwZjQlV6f_S0-ZSGxt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzjkpNcCwPbD3OjOcx4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy9EhOyfeSSJ7ERWIh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugymqe4tYCyCSGfK-w94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
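The raw response is a JSON array of per-comment coding objects, each keyed by a comment `id` plus the four coding dimensions. A minimal sketch of how such a batch could be parsed and indexed by id (the function name `index_codings` and the validation step are illustrative, not part of the tool):

```python
import json

# Expected coding dimensions in every entry (from the table above).
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_json: str) -> dict:
    """Parse a batch LLM response and map comment id -> coding dict."""
    by_id = {}
    for entry in json.loads(raw_json):
        # Separate the id from the coded dimensions.
        coding = {k: v for k, v in entry.items() if k != "id"}
        missing = DIMENSIONS - coding.keys()
        if missing:
            raise ValueError(f"{entry['id']}: missing dimensions {missing}")
        by_id[entry["id"]] = coding
    return by_id

# Example with one entry taken verbatim from the response above.
raw = '''[
  {"id": "ytc_Ugzi1ppnwe6X4yEjptd4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "indifference"}
]'''

codings = index_codings(raw)
print(codings["ytc_Ugzi1ppnwe6X4yEjptd4AaABAg"]["reasoning"])  # consequentialist
```

Indexing by id makes it easy to join a batch response back to the individual comment pages shown here, and the validation step catches entries where the model omitted a dimension.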