Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I feel this is being exaggerated, it might hinge the extreme sides to some extent however. Some kind of doomsday saying. A realistic scenario to AI development is that more control gets imposed eventually. Probably even on the cost of rapid development if that ever seems necessary to the expert eyes
youtube AI Governance 2025-06-16T14:4…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        consequentialist
Policy           regulate
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwnVSuzSOjrYdqWtdl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugzkb0JgYMNNYS4Bbah4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyxSR6EJKMTP9gN_Rx4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxRHJVsTqufHWovB1V4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzpfELacn4dGlfkBb94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw95Kev7pLCn2xahL54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxh0s_jNSrT_Ujhwb54AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz04yGUE4Weo7XymBd4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugw_q4uWMeHz7qZvZ3J4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgydxUUgU2wIVK651ZF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
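A raw response like the one above can be checked and indexed before use. The sketch below parses a subset of that response, validates that every entry carries all four coding dimensions plus an id, and looks up one comment's coding. The id used for the lookup is an assumption: it is the entry whose coded values match the Coding Result table on this page.

```python
import json

# Subset of the raw LLM response shown above; schema is id + four dimensions.
raw = """[
  {"id": "ytc_Ugz04yGUE4Weo7XymBd4AaABAg", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgydxUUgU2wIVK651ZF4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]"""

REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

rows = json.loads(raw)
# Reject any entry missing a dimension before indexing by comment id.
assert all(REQUIRED <= row.keys() for row in rows)
by_id = {row["id"]: row for row in rows}

# Assumed id for the comment on this page (values match the table above).
coding = by_id["ytc_Ugz04yGUE4Weo7XymBd4AaABAg"]
print(coding["policy"], coding["emotion"])  # regulate mixed
```

A schema check like this is cheap insurance: a model that drops a field or renames a dimension fails loudly here rather than silently corrupting downstream counts.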