Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
they have to have the terminator scenario on the table for the massive over-capitalisation to be worth it and the bubble sustained - if they tell us the LLMs are incapable of AGI (ie agentic self-aware, not this "smarter than human" nonsense indicator, cats are "smarter than humans" at some tasks) they let the cat out of the bag that the most it can do is rearrange words and pixels in a useful fashion
YouTube · Viral AI Reaction · 2025-11-04T19:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw45GTuddKXwb3nnwd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwd9hp9Nxhj26qEcC14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxVdBYlI1cUr-jQMYh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxDN-tHoCaZcrAnBk54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw9vWclUS1TUJvkc894AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugyp0rBPHhcN2WmG6tt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxadv3eI9lu8UselNF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxGdzXIijA10sjHjTZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgySw50yudaNwxE5cYV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzfAb9e2HZAy1PL4UF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]