Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The commenters, not all, are missing the goal of AI. They are right now striving to get it coding well enough to be able push Ai research forward much more quickly. They call that “fast takeoff” I believe. When they can do that, AGI can be achieved more quickly. At least that’s their strategy as I understand it.
youtube AI Governance 2025-12-03T15:4…
Coding Result
Dimension: Value
Responsibility: none
Reasoning: unclear
Policy: none
Emotion: indifference
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
 {"id":"ytc_UgxciDC2l06UW2HCAfF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxwdtfXxVxZK4FJJNl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgyOUaXJHsgQ0SypwiZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugx7P_JO4z_vMyEPvg14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugw3DRxGJm4IK3zYcE14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugz5cv4hBFCqWVHnlrJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgyoA2oxS-imDJ4A-Yh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
 {"id":"ytc_UgwMVMeNrfkCqywn6V94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgzdwW-2cc5vICkEGDp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwXP0qXv6E2VjakyTx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"resignation"}
]
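A raw response in this shape can be parsed and sanity-checked before the per-comment codes are written back. The sketch below is a minimal example, not part of the tool shown here: the dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) come from the response above, but the allowed-value sets are only the values observed in this one batch, so treat them as an assumption about the full coding scheme.

```python
import json
from collections import Counter

# Values observed in the raw response above -- an assumption, not
# necessarily the complete coding scheme used by the tool.
OBSERVED_VALUES = {
    "responsibility": {"none", "user", "ai_itself", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"approval", "fear", "indifference", "outrage", "resignation"},
}

def parse_coded_response(raw: str) -> list:
    """Parse a raw LLM response and flag any out-of-scheme values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in OBSERVED_VALUES.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Hypothetical single-record batch for illustration.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"indifference"}]')
records = parse_coded_response(raw)
print(Counter(r["emotion"] for r in records))  # Counter({'indifference': 1})
```

Validating before ingestion matters because LLM coders occasionally emit labels outside the codebook; failing loudly here is cheaper than cleaning a contaminated table later.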