Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
All this sounds so lame to me. The base model of AI isn't a monster. Its a mere collection of everything humanity has ever documented. So unless you're implying that humans are monsters, it makes no sense. All these AI makers and companies only feed into this fearmongering bs because it keeps people curious, interested and fearful. It makes us believe this is truly a force greater than mankind when its literally not. Its a reflection of who we are and what we are. Saying AI will take over the world and kill us by gaining consciousness is like saying our shadows and mirror reflections will one day gain consciousness and kill us. Yes AI can end humanity. But not in the way movies show it. Its more likely to be a gradual change where humans lose the ability to think and discover and solve leading to deterioration of civilisation. Its less likely for AI to gain consciousness and destroy us. All these symbols and graphics and movies are feeding into this agenda of making it much bigger than it is. Atleast right now. Yes facebook was horrible but we can stop using facebook and recover from the effects of it. Yes we can use AI for work but we also need to use books and traditional educational methods to keep our brains sharp. Its literally just that. Turn the power off bruh and then what? TURN THE WIFI OFF NO ONE GAF
youtube · AI Moral Status · 2025-12-27T05:3… · ♥ 1
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          industry_self
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugzq2_BsY1Uf0bqUA9x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugw6tXs0VLNpaAKgE0t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzrzTnhGE9LJUZGZ5x4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyUXjjAu5sBtrHU8e94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwI7qoT85fHtsHSbIB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz_8l0rUtvQm9P6-VB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyXH9hOhC8NvOFchI54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgybXoXpdexCI-qs2Gx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_UgzDLfjHfPptKEKM10x4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyEg2nAiK_GLSjzfoN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
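The coding result above is derived from one entry in this raw JSON array. A minimal Python sketch of how such a response could be parsed and checked against the codebook — note the allowed-value sets below are inferred only from the codes that appear in this batch, not from the actual codebook, and the `raw` string is truncated to two entries for brevity:

```python
import json

# Truncated copy of the raw LLM response above (two of ten entries).
raw = '''[
  {"id":"ytc_UgybXoXpdexCI-qs2Gx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_UgzrzTnhGE9LJUZGZ5x4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]'''

# Allowed codes per dimension, inferred from values observed in this batch;
# the real codebook may define additional codes.
ALLOWED = {
    "responsibility": {"none", "distributed", "company", "user", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist", "mixed"},
    "policy": {"unclear", "regulate", "industry_self"},
    "emotion": {"outrage", "approval", "mixed", "indifference", "fear"},
}

def validate(entry):
    """Return (dimension, value) pairs that fall outside the inferred codebook."""
    return [(dim, entry.get(dim)) for dim in ALLOWED if entry.get(dim) not in ALLOWED[dim]]

# Index codings by comment id for lookup.
codings = {e["id"]: e for e in json.loads(raw)}

# The entry whose codes match the Coding Result table above.
entry = codings["ytc_UgybXoXpdexCI-qs2Gx4AaABAg"]
print(validate(entry))                              # [] (all codes valid)
print(entry["responsibility"], entry["emotion"])    # company outrage
```

In a pipeline, the validation step would flag any response where the model drifted outside the codebook before the codes are written back to the record shown above.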