Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The idea we can do something to reign in AIs, especially aligning super-intelligence, is so quaint - you only have to miss an opportunity once, and it's game over. So whether it is 1 year, 10 years or 100 years before we get out played by a god like intelligence, it will happen. We should stop them building bigger models, we have enough useful AI and we can just about control them now.
youtube AI Governance 2025-06-23T09:3… ♥ 2
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugwmwq2HkwOKVz-K98x4AaABAg", "responsibility": "developer",   "reasoning": "virtue",          "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_UgzN4qoPTEq3mCMXqWN4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_Ugw7pmPSccTpBDj6Ih14AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",      "emotion": "mixed"},
  {"id": "ytc_UgwAzJ5KZQs_qY45ckp4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgxlZy_mtFIR3993Dsp4AaABAg", "responsibility": "company",     "reasoning": "contractualist",   "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgxWElIu6YFy1-3wtWR4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_UgzVDZ3tOcbnRn8f_vF4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugwq6dIsIbjGYdPUGE14AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",      "emotion": "approval"},
  {"id": "ytc_UgyML-R3NjX5FbAjDe94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_UgzrtNkMuvI9mobMPd14AaABAg", "responsibility": "company",     "reasoning": "mixed",            "policy": "none",          "emotion": "mixed"}
]
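The raw response is a JSON array of records keyed by comment id, so recovering the coding for one comment is a parse-and-index step. A minimal Python sketch of how a pipeline might do this (the single-record payload is a trimmed stand-in for the full array above; variable names are illustrative):

```python
import json

# Trimmed stand-in for the raw batched response: one record from the
# array above, covering the comment shown on this page.
raw_response = '''
[ {"id": "ytc_UgyML-R3NjX5FbAjDe94AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "ban",
   "emotion": "fear"} ]
'''

# Index the coded records by comment id for O(1) lookup.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

# Retrieve the coding for a single comment by its id.
coded = records["ytc_UgyML-R3NjX5FbAjDe94AaABAg"]
print(coded["policy"], coded["emotion"])  # ban fear
```

Indexing by id rather than scanning the list also makes it easy to detect comments the model skipped: any id absent from the dict was not coded in this batch.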