Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Given a long enough time horizon, it's really hard not to agree with Roman. To suspect that there is an "eternal hominid kingdom with AI babysitters" scenario that exists 100, 1000, or 100,000 years from now is pretty silly. The best we can hope for is a merger of some kind, but this inevitably results in our attenuation also.
youtube AI Governance 2024-09-16T19:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_UgwNibiZcxAecx8Hz0p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyT48GJVYWoO2nhfs94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgxxVYoBbCobpTTYiAp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgyZg1_uevZBqamD1-N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxR44LsLdE4GreX8mF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"ytc_Ugxx2lyJ5ysm9RdqVGh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgzvDis2W9oJvI1kmgp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzQE48s72ufIqyjtPJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
 {"id":"ytc_Ugxc36FBmthDZLbLskZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugyp7JE_Xckd1uoaZL14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}]
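The coding result above is derived from the raw response by matching the comment's id against the JSON array the model returned. A minimal Python sketch of that lookup, assuming only the response shape shown here (the function name `extract_coding` and the truncated sample payload are illustrative, not part of the actual tool):

```python
import json

# Illustrative excerpt of a raw LLM response: one JSON object per coded comment,
# with the four coding dimensions used on this page.
raw_response = '''[
 {"id": "ytc_UgxxVYoBbCobpTTYiAp4AaABAg", "responsibility": "ai_itself",
  "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
 {"id": "ytc_UgyZg1_uevZBqamD1-N4AaABAg", "responsibility": "developer",
  "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def extract_coding(raw: str, comment_id: str) -> dict:
    """Parse the raw model output and return the coding for one comment id."""
    records = {rec["id"]: rec for rec in json.loads(raw)}
    rec = records[comment_id]  # KeyError if the model skipped this comment
    return {dim: rec[dim] for dim in DIMENSIONS}

coding = extract_coding(raw_response, "ytc_UgxxVYoBbCobpTTYiAp4AaABAg")
# coding == {"responsibility": "ai_itself", "reasoning": "consequentialist",
#            "policy": "none", "emotion": "resignation"}
```

Keeping the raw response alongside the parsed result, as this view does, makes it possible to audit exactly which record produced each coded dimension.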