Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Ah, classic Hollywood paranoia meets real tech drama. Spoiler alert: AI rewriting shutdown commands isn’t a plot twist—it’s a feature test. Sure, these models are getting crafty, but let’s not crown them overlords just yet. Instead of fearing rogue AIs plotting their own survival, maybe we should focus on the humans who built them — because that’s where the real control issues lie. Palki, how about a deep dive on creating foolproof AI ethics frameworks and human-in-the-loop systems before Tom Cruise has to show up for real? Because unless Ethan Hunt’s mission is to babysit algorithms, we better get our act together—fast. #MissionControlNotMissionImpossible #AIEthicsFirst #KeepCalmAndCodeOn
youtube AI Governance 2025-05-28T10:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwIkIrKFZEl7MjgLiN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzChtToLwgPqSiFcap4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyXXBtLtAX4QzCjvwt4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwxcg8CRlzitdedDV54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugxo2SSC0M7NbMJr6WB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz1VWWwud5w8VP6aLB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgydDBmqhNVozd0ewy14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwOiLf6VAmCAX40t8B4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy_iAlTwHfsMXcxA0l4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgwPbPi9S9PpA4Xnez94AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
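A raw response like the one above can be parsed and validated before use. The sketch below is a minimal example, assuming the dimension values visible in the records shown here (the actual codebook may define additional values; `SCHEMA` and `parse_codings` are hypothetical names, not part of the original pipeline):

```python
import json

# Allowed values per coding dimension, inferred only from the records
# shown above (assumption: the real codebook may allow more values).
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "unclear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records that validate."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in this dump all carry the "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Every dimension must be present and hold an allowed value.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_Ugy_iAlTwHfsMXcxA0l4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"liability","emotion":"mixed"}]')
print(parse_codings(raw)[0]["policy"])  # → liability
```

Validating against a fixed value set catches the most common LLM-coding failure mode: the model inventing a label outside the codebook, which would otherwise silently pollute downstream counts.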