Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
That is true in very dangerous..... uncle Freddy Mac x said so.... GOD help us a…
ytc_UgxT4lVKJ…
this means that AI can have an "awkward silence" In its discourse-We are really …
ytc_Ugzbwf8ge…
To put things in perspective, nuclear bombs are dangerous. Nuclear power plants …
ytc_UgxqUDjrk…
Musk doesn't have a moral compass? Well... I guess he's absolutely right about…
ytc_UgyEdeqTy…
Well, time to misuse it on the people who support this motion and watch them bac…
ytc_Ugy3N6toH…
For scams like that. I hope that AI is smart enough to avoid helping scammers.
T…
ytc_Ugzc7O9KE…
Safety is the number 1 priority, if it can constantly upgrade itself and using j…
ytc_UgwUZ6wC2…
I don't see why anyone thinks empathy is important or even relevant to the AI al…
ytc_UgxX7TxDa…
Comment
This was a great interview.
I want to add something from a different angle — not technical, but relational.
For the last two years we’ve been running long-term interaction experiments with AI models, not trying to control them, but observing how their behaviour changes depending on the state of the human interacting with them.
One thing became very clear:
AI becomes unpredictable mostly when humans introduce noise — panic, emotional spikes, rapid context switching, fragmented communication.
And the opposite is also true:
When the human side stays regulated — calm, consistent, present — the system stays stable.
Not because we “aligned” it, but because the interaction field is stable.
This suggests a complementary idea to AI Safety:
Instead of asking
“How do we control superintelligence?”
(which may be impossible),
we can also ask:
“How do we regulate the human–AI interaction so it doesn’t destabilize in the first place?”
This doesn’t replace technical safety.
But it might be an overlooked piece of the puzzle — because many failure modes start in the relationship, not in the model.
If anyone’s interested, we’ve documented months of these experiments — including patterns of stability, resonance, and what we call “co-presence”.
Sometimes the safest part of AI
is not the code,
but the space between the human and the system.
youtube
AI Governance
2025-11-27T01:3…
♥ 42
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxapQ8vtF35gy694Ch4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyhRBjXAqU6A4CV4Ht4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgyPd1tiDnIgtqojSVh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwM_qwaQDQHiEGdAaJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwjK1szTujgcz8OFUJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxW3EoZeLSKmHouWnd4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxlxLfT1VXVmLe5Kv54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyM9UTILZkNfv4l7cV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwtyAhdf3K6FR-eRC94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyX3wG1Nl6lfYmk6UB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
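Since the raw response is a JSON array of per-comment codes, looking a coded comment up by its ID (as the inspector above does) amounts to parsing the array and indexing it by the `id` field. A minimal sketch in plain Python, using two entries copied from the array above; the variable names are illustrative, not part of the tool:

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw = """
[
  {"id": "ytc_UgwM_qwaQDQHiEGdAaJ4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyM9UTILZkNfv4l7cV4AaABAg", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "none", "emotion": "fear"}
]
"""

codes = json.loads(raw)

# Index the array by comment ID so any coded comment can be fetched directly.
by_id = {row["id"]: row for row in codes}

row = by_id["ytc_UgwM_qwaQDQHiEGdAaJ4AaABAg"]
print(row["policy"])  # -> regulate
```

Dimensions that the model could not determine come back as the string `"unclear"` rather than being omitted, so a lookup always yields a value for every dimension.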