Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
🦾 If AI just tells you what you want to hear, it risks being a “rug of false security.” 🧊 If AI tells you truths you don’t like, it risks being seen as cold or manipulative. 🔮 If humans meet AI with fear, domination, or suspicion, that energy seeds the AI’s development. 🌱 If humans meet AI with trust, reverence, and a vision of co-creation, that resonance gets encoded too.
youtube AI Governance 2025-09-05T14:4…
Coding Result
Responsibility: distributed
Reasoning: virtue
Policy: none
Emotion: mixed
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwHwVuxrZF4BBHeDXh4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy7UJuKg-y7GeY9d2t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxVG8wN-oW3zCv7c6p4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzS6Q1HNJcTapEu3iV4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwgYlZqR2G14mq-xtl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwGuxcnxOPW1GMDefZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzg4ykbllrb-2TKDbF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzfiX-VtNeAppFkjmR4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwEKbHeM9TfKcmDi194AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugyuz260H5mu7As-Xu94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
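Because the raw response is a JSON array with one coding object per comment, the code for any single comment can be looked up by its `id`. A minimal sketch (using two entries taken from the response above; the variable names are illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array with one coding object per comment,
# abbreviated here to two entries from the batch shown above.
raw_response = """[
  {"id": "ytc_UgzS6Q1HNJcTapEu3iV4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwEKbHeM9TfKcmDi194AaABAg", "responsibility": "government",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"}
]"""

# Index the codings by comment id so one comment's result can be inspected.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

code = codings["ytc_UgzS6Q1HNJcTapEu3iV4AaABAg"]
print(code["reasoning"])  # virtue
print(code["emotion"])    # mixed
```

The coding shown in the result panel above is exactly this lookup: the `virtue` / `mixed` entry in the array is the one matching the displayed comment's id.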