Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The scary thing is... every comment out there you read anymore...you assume we can tell the difference between AGI and human. But what if we really cant ? Like the only way for humans to survive, is to invent a new language but never share it with AGI in any way. Dont write it on the outside of an envelope, limit its use in how it is shared outside in the world. Can you imagine a world where every community is just trying to survive and outsider comes and needs help...and you absolute can not tell the difference between an artificial human and a real human ? It already knows our secrets and fears and understands how to create paranoia within ourselves. Are we there yet ? Are we there yet ? Are we there ? Right now, we are the 5 year old in the backseat and AI is mom and dad telling us ...no. or we get there when we get there. We would never dream of mom and dad hurting us. But we humans know better but never think ahead. When are we gonna wake up in the backseat, ask to stop the car to go pee....but out of fear we take off ? Kids, its time to stop the car to go peepee .... Just sayin... Peace out, brother/sister!
youtube AI Governance 2024-06-17T08:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzVCn7kBJz3MBkRWut4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzLKTLxBCADFw4a-vt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwmPzJoSunfx5ih9rF4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugz7pfjIDrP9c2UykvB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxPfevVaW9ephYafRN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzoXZpBWRKanAetf0h4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx9eiAtO_W4tr0I7CN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgzFg5igaDtJAuxt7tF4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyAo_WfFZywggHY_qB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwlzJTk5nFsT51AU_54AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"}
]
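A raw response like the one above can be parsed into per-comment coding records before display. The sketch below is a minimal, hypothetical helper (the function name, the `ALLOWED` codebook, and the fallback-to-"unclear" behavior are assumptions, not the pipeline's actual implementation); it assumes only the field names visible in the JSON.

```python
import json

# Allowed values per coding dimension, inferred from the labels seen in the
# raw response above; the real codebook may include additional values.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "mixed", "unclear"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array) into {comment_id: codes}."""
    records = {}
    for item in json.loads(raw):
        codes = {dim: item.get(dim, "unclear") for dim in ALLOWED}
        # Coerce values outside the assumed codebook to "unclear" rather
        # than dropping the record, so every comment keeps all dimensions.
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                codes[dim] = "unclear"
        records[item["id"]] = codes
    return records
```

Keying the result by comment `id` makes it straightforward to look up the coding shown in the table for any single comment.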