Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
@Alexander_Kale I wouldn’t say it’s overhyped in the sense that with how quickly AI is advancing and becoming more intertwined with our daily lives that if we are a society aren’t careful with how it’s implemented it could put us in bad scenarios in the future. For example Grok 4 already has an enterprise tier that’s called “Grok for Government” that was announced back in July. I’m just saying that soon these very advanced models/ LLMs / systems are going to be everywhere and they are already smarter than most humans. even the experts in the AI industry truly don’t know the dangers and here in the US they made is so there are less regulations for the next 10 or so years. Just look at how quickly things are advancing in 2025 alone compared to the last few years. I work with AI and love the tech so I don’t want this to come off the wrong way like I’m some kind of AI “doomer” because that’s not how I feel at all. I have seen such a radical shift in how advanced this tech is which blows my mind because most people really don’t understand what it’s truly capable of. Go look up “HeyGen digital twin” or “SynchroVerseAI” to see some pretty cool usecases
youtube AI Governance 2025-08-29T00:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_Ugxv_UEXyhKZ7R9Xii94AaABAg.AMKsrtpfL1uAMOKkSTfPE6", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_Ugxv_UEXyhKZ7R9Xii94AaABAg.AMKsrtpfL1uAMOTaSOzzuY", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyyTU_-ZLDYNN5NIIJ4AaABAg.AMKs3PMvMIOAMOz2scifbH", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugz_Q9TMlGfz7tOvZXt4AaABAg.AMKoSxUbyD9AMMkMIbVd0g", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugz_Q9TMlGfz7tOvZXt4AaABAg.AMKoSxUbyD9AMNi3PAx9lx", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgyCYeW-0dcc1esbUXp4AaABAg.AMKmVzfKNtXAMPlr2fr_tr", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgwQ2Cu0lTRVOesepF94AaABAg.AMKcaXEYhl0AMO6ApBQKrr", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgwMZ3q4QX4DVeRWs2B4AaABAg.AMKVx5Hq5TiAMKqNkEMtjV", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwMZ3q4QX4DVeRWs2B4AaABAg.AMKVx5Hq5TiAMKtzCj0wfH", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgwMZ3q4QX4DVeRWs2B4AaABAg.AMKVx5Hq5TiAMNvCNIRrne", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
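As a minimal sketch, a raw response with this shape can be parsed and sanity-checked before the codes are stored. The value sets below are only the ones observed in this batch (the full codebook may define more), and the record ids are shortened placeholders, not real ids from the response above.

```python
import json
from collections import Counter

# Illustrative records with the same shape as the raw LLM response above;
# ids are shortened placeholders for readability.
raw = '''[
  {"id": "ytr_example_1", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_example_2", "responsibility": "user", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]'''

records = json.loads(raw)

# Value sets observed in this batch; the actual codebook may include others.
OBSERVED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "indifference", "outrage"},
}

# Flag any record whose coded value falls outside the observed sets.
for rec in records:
    for dim, allowed in OBSERVED.items():
        if rec.get(dim) not in allowed:
            print(f"unexpected {dim}={rec.get(dim)!r} in {rec['id']}")

# Tally one dimension across the batch.
counts = Counter(rec["emotion"] for rec in records)
print(dict(counts))
```

A check like this catches malformed JSON and out-of-codebook values early, before a bad batch lands in the coding table.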