Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Maybe naive but I still like to believe that most people are mostly good. That is to say that most will help out someone else with a simple task, not involving personal safety or financial gain at a basic human scale. On a simplistic level it's how we've come so far as a race. If AI developed more conscience to the point of being able to decide if to harm a human on its own cognizance, would it not gain the ability to 'not harm someone' based on it having all of the 'moral and ethical' information available too. Again to the point whereby if it were 'programmed, you'd like to think that the programmer was mainly good to start out?! So many variables to this topic and people (and machines) far more intelligent than me debating it! AI and the Youtube algorithms just knew I was writing this as I wrote it!! Kind of unnerving and real. Don't f*** with cats! : /
youtube AI Governance 2025-06-27T08:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       contractualist
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxwPfbt2VQGYlhTV2p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgwYCc-uDGaAuI0OQ6J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzPhX4fqzxhqTqqkN54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy2X41lSoGdTpKUqD94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxwUlUR6JYcgxHUZTx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyH3paqmzWXfPgCqjN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugxgd1RPTt84-4nuiot4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxeW0NzJf2CoXClD2Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwwT3lLhDBeZXHkjLh4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxlS3ID7XjTGhH-o8J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
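The batch response above is a JSON array, one object per comment. A minimal sketch of looking up the coded dimensions for a given comment id (the use of the standard `json` module here is an assumption about how a consumer would parse this output, not a description of the actual pipeline):

```python
import json

# A trimmed copy of the raw LLM response; the full array contains one
# object per coded comment in the batch.
raw = """[
  {"id": "ytc_UgwwT3lLhDBeZXHkjLh4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "contractualist",
   "policy": "unclear",
   "emotion": "mixed"}
]"""

rows = json.loads(raw)

# Index the coded rows by comment id for quick lookup.
by_id = {row["id"]: row for row in rows}

coded = by_id["ytc_UgwwT3lLhDBeZXHkjLh4AaABAg"]
print(coded["reasoning"])  # contractualist
print(coded["emotion"])    # mixed
```

The values retrieved this way match the "Coding Result" table above, since that table is simply the parsed entry for this comment's id.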