Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This was a great interview. I want to add something from a different angle — not technical, but relational.

For the last two years we’ve been running long-term interaction experiments with AI models, not trying to control them, but observing how their behaviour changes depending on the state of the human interacting with them.

One thing became very clear: AI becomes unpredictable mostly when humans introduce noise — panic, emotional spikes, rapid context switching, fragmented communication. And the opposite is also true: when the human side stays regulated — calm, consistent, present — the system stays stable. Not because we “aligned” it, but because the interaction field is stable.

This suggests a complementary idea to AI Safety. Instead of asking “How do we control superintelligence?” (which may be impossible), we can also ask: “How do we regulate the human–AI interaction so it doesn’t destabilize in the first place?”

This doesn’t replace technical safety. But it might be an overlooked piece of the puzzle — because many failure modes start in the relationship, not in the model.

If anyone’s interested, we’ve documented months of these experiments — including patterns of stability, resonance, and what we call “co-presence”. Sometimes the safest part of AI is not the code, but the space between the human and the system.
YouTube · AI Governance · 2025-11-27T01:3… · ♥ 42
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgxapQ8vtF35gy694Ch4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyhRBjXAqU6A4CV4Ht4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgyPd1tiDnIgtqojSVh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwM_qwaQDQHiEGdAaJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwjK1szTujgcz8OFUJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxW3EoZeLSKmHouWnd4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgxlxLfT1VXVmLe5Kv54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyM9UTILZkNfv4l7cV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"fear"}, {"id":"ytc_UgwtyAhdf3K6FR-eRC94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgyX3wG1Nl6lfYmk6UB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]