Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
29:57 I go with focusing on creating this technology before China. The idea is the first country that has access to it will have one of the most powerful things known to man, and if they decide to use it to invade or exploit, we’d be screwed. If the super-intelligence ends up going rogue and decides to destroy everything in its path, we should have our own super-intelligence, or any technology, AI or not, that can counteract it. It’s like a blueprint that allows you to make an antimatter bomb gets created and you are debating whether or not you should make this, all the while you know your neighbors overseas are not only in the process of creating this antimatter bomb, but after its creation, they might threaten to wipe you off the map with it.
youtube AI Governance 2025-08-26T16:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgxdpJSeUtp8sr5d2fN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgywI0G0wPgDP4Xn1hl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwfZVoQhKmdpd2fT0J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzvy0KFcG-q21C3ZJV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxB0c3uFNFfd8TXkKV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
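The raw response is a JSON array of per-comment codes, one object per comment ID, with one label per dimension. A minimal sketch of parsing and validating such a response, using only the Python standard library; note the allowed-label sets here are inferred solely from the values visible above, not from the project's actual codebook:

```python
import json

# Allowed labels per dimension, inferred only from the coded examples shown
# above; the real codebook likely defines more labels.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "unclear"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "resignation", "approval"},
}

# A one-record excerpt of the raw LLM response shown above.
raw = ('[{"id":"ytc_UgxdpJSeUtp8sr5d2fN4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')

def parse_codes(text: str) -> list[dict]:
    """Parse the model output and keep only records whose labels
    are all within the allowed sets for their dimension."""
    records = json.loads(text)
    return [
        rec for rec in records
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items())
    ]

codes = parse_codes(raw)
print(codes[0]["emotion"])  # → fear
```

Validating each record before it reaches the database is what makes a "Raw LLM Response" view like this useful: any record that fails the label check can be surfaced here for inspection instead of being silently stored.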