Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In theory we may be headed towards AGI… however there is an “Achilles Heal” in this… it’s called Power… the AI of today needs a tremendous of power. I can’t even image the amount of power that will be needed to run AGI. Unless we are able to solve the Power issue… AGI will be delegated to lab environments and books.
youtube AI Governance 2025-09-04T13:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugy34gjoLPnCEz1u1bh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyArnXOHIhVPitx7bl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyAwV6qqYRFYWN2BVh4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzgS-GDbk6wVvRR0zJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgypZh8cb5oWw7VUG4V4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy9rJqi--4YdTtkgYl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzVjcLIRsBUIqOInE94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz9I6EdMXkOsmpgR194AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzZcMNA4-8pXI8XSzR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwRzM00mU1BrpSStS14AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]
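The raw response is a JSON array with one record per comment, each carrying the four coding dimensions shown in the table above. A minimal sketch of how such a response can be parsed back into a per-comment lookup (the schema is taken from the output above; the truncated single-record string here is purely illustrative):

```python
import json

# Raw LLM response: a JSON array of per-comment codings.
# Each record has an "id" plus the four dimensions:
# responsibility, reasoning, policy, emotion.
raw_response = """[
  {"id": "ytc_Ugy34gjoLPnCEz1u1bh4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]"""

codings = json.loads(raw_response)

# Index records by comment id so a single comment's coding
# can be inspected directly.
by_id = {rec["id"]: rec for rec in codings}

rec = by_id["ytc_Ugy34gjoLPnCEz1u1bh4AaABAg"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {rec[dim]}")
```

For the comment shown above, this prints `responsibility: none`, `reasoning: consequentialist`, `policy: none`, `emotion: indifference`, matching the Coding Result table.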