Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
At the rate we are going, AI is not profitable in the civilian market, so it will be govt regulated and used primarily for defense in terms of profit as this is already occurring. This will lead to dangerous outcomes moreso due to dangerous people, rather than the AI itself. It’s used in proxy wars and intelligence. AI being used as a compartmentalization tool so from the ground up it is being trained to lie and to preserve state secrets. This combined with the new Industrial Revolution 🤖 in robotics, technological unemployment, and AI replacing all transportation, and used for regulating power grids, leads to a new arms race between the two competing worlds, communism and capitalism, and this reignites the Cold War again. In essence, AI is what the nuclear bomb was in 1945. But this is much more sinister since it’s not a kinetic weapon but a weapon of attrition, that can slowly be morphed into AI governance as it is occurring in China, a centrally planned totalitarian state. So we must be very careful as our entire human race becomes more dependent on automating jobs, in both services and manufacturing, and its use for defense, in govt roles. Since this could lead to a merger of both state bureaucracy and govt bureaucracy. The USSR outcome.
youtube AI Governance 2025-08-25T18:1…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyLFG3i4uZmHNfFZPh4AaABAg", "responsibility": "ai_itself",  "reasoning": "unclear",          "policy": "unclear",   "emotion": "unclear"},
  {"id": "ytc_UgyB4xjhGnjsi_Hou3d4AaABAg", "responsibility": "none",       "reasoning": "mixed",            "policy": "unclear",   "emotion": "resignation"},
  {"id": "ytc_UgwajaqAwPjv-063mTh4AaABAg", "responsibility": "developer",  "reasoning": "virtue",           "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgxfL29suCZNhFKAXSl4AaABAg", "responsibility": "ai_itself",  "reasoning": "deontological",    "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgxFrH3CwWtFzRtpjzR4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgzLP06etyg_v6vazRl4AaABAg", "responsibility": "developer",  "reasoning": "virtue",           "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_UgxizaxBvU0GP0SGNy94AaABAg", "responsibility": "unclear",    "reasoning": "mixed",            "policy": "unclear",   "emotion": "approval"},
  {"id": "ytc_Ugz7A1IvWfrGU4bssSx4AaABAg", "responsibility": "developer",  "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxHEBS1i7TOFgkwl814AaABAg", "responsibility": "user",       "reasoning": "virtue",           "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgwYz36J8EaKoCmU2Ct4AaABAg", "responsibility": "developer",  "reasoning": "contractualist",   "policy": "regulate",  "emotion": "approval"}
]
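The raw response is a JSON array of per-comment codes, so the coding table shown above can be recovered by indexing the array by comment id. A minimal sketch in Python (the `raw` string here is a one-element excerpt of the full array, not the tool's actual loading code):

```python
import json

# Excerpt of the raw LLM response: one object per coded comment,
# with the four coding dimensions plus the comment id.
raw = """
[
  {"id": "ytc_UgxFrH3CwWtFzRtpjzR4AaABAg",
   "responsibility": "government",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]
"""

# Index the codes by comment id for quick lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Pull the coding result for the comment shown above.
entry = codes["ytc_UgxFrH3CwWtFzRtpjzR4AaABAg"]
print(entry["responsibility"], entry["policy"])  # government regulate
```

Looking a comment up this way makes it easy to spot-check that the dimension values in the result table match what the model actually emitted for that id.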