Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgxLM4a7F…`: "I am loving the putting side by side of an AI gullible mind (lesser intelligence…"
- `ytc_UgzJNjRLO…`: "I think if AI were cognizant enough to have opinions on consciousness and the fo…"
- `ytc_Ugz3C622e…`: "Me: Watches video / Also me who wants to make a replica of SCP-079: Yeah maybe th…"
- `ytr_Ugz-OHkE4…`: "You act like this is only in the KPOP industry. Literally anyone who is famous h…"
- `ytr_UgyyctZhz…`: "Gee ,what was he reporting on. I did nnot sense any coherency. Signal not presen…"
- `ytc_UgzvDis2W…`: "Sincerely, this is overblown. Base models are chaotic mostly because they just t…"
- `ytc_UgzE4cJQW…`: "Wonder if the Center for Humane Technology might be keen to weigh in on this. AI…"
- `ytc_Ugw2cyJMe…`: "Smart response. / AI can compliment workflows and speed up development. Rejecting …"
Comment
Ezra, your question about an “off switch” for AI hits the core issue: control shouldn’t mean panic or shutdown, it should mean governance.
The more realistic model isn’t a big red button — it’s a role and permissions system, like an operating system.
Humans stay as root users; AIs run as processes with defined privileges.
Instead of hoping we can unplug the machine, we architect clear boundaries: what the model can access, what data it can alter, and who approves escalations.
That design treats AI as an augmentation layer, not a rival mind. It keeps humans as the final context — we decide scope, timing, and moral weight.
In practice, the off switch isn’t a single command; it’s a security structure that scales with trust.
When permissions are transparent and revocable, we get both safety and usability — the same principles that keep any complex system stable.
We’ll provide more on this through time.
youtube
AI Governance
2025-10-17T09:3…
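The comment above argues for treating AI control as a role-and-permissions system rather than a single off switch. A minimal sketch of that idea, with all names (`RootUser`, `AgentProcess`, `allowed`) invented for illustration rather than taken from any real system: humans hold root and can grant, revoke, or suspend; an agent's action runs only if the agent is live and holds the matching privilege.

```python
# Illustrative sketch (hypothetical names, not an existing API) of the
# comment's model: humans as root users, AIs as processes with explicit,
# revocable privileges.
from dataclasses import dataclass, field


@dataclass
class AgentProcess:
    """An AI running with a defined, enumerable privilege set."""
    name: str
    privileges: set = field(default_factory=set)  # e.g. {"read:comments"}
    suspended: bool = False


class RootUser:
    """Human operator: grants, revokes, and approves escalations."""

    def grant(self, agent: AgentProcess, privilege: str) -> None:
        agent.privileges.add(privilege)

    def revoke(self, agent: AgentProcess, privilege: str) -> None:
        agent.privileges.discard(privilege)

    def suspend(self, agent: AgentProcess) -> None:
        # The "off switch": not destruction, just removal of all capability.
        agent.suspended = True
        agent.privileges.clear()


def allowed(agent: AgentProcess, action: str) -> bool:
    """An action runs only if the agent is live and holds the privilege."""
    return not agent.suspended and action in agent.privileges
```

In this framing, "revocable and transparent" means the privilege set is inspectable data, and suspension is an ordinary administrative operation rather than an emergency.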
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzmexWnJbzB4UVydcp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugw7OLpNX_TZUxqq59p4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwKEehWUnNlPWy_TWd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyU9nMB3UAMNASSNJJ4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyP5g2sFlAM953W1SJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx9H0IgRcbmLumw7BZ4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwMPclbDSD7WueoaUd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyS6eg3Ahxh9j_h0xl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugza-ErPaJCR14qaidV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyqpjoKAD_xVT18qRh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
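The raw response is a JSON array of coding records keyed by comment ID, with one value per dimension (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" step, using two records copied from the response above (the function name `index_codings` is an assumption, not part of the tool):

```python
# Parse the raw LLM response and index each coding record by comment id,
# so any comment's four coded dimensions can be fetched directly.
import json

raw_response = """
[
 {"id":"ytc_UgwKEehWUnNlPWy_TWd4AaABAg","responsibility":"distributed",
  "reasoning":"contractualist","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UgyP5g2sFlAM953W1SJ4AaABAg","responsibility":"ai_itself",
  "reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_codings(raw: str) -> dict:
    """Map comment id -> {dimension: value} for every record in the response."""
    return {rec["id"]: {d: rec[d] for d in DIMENSIONS}
            for rec in json.loads(raw)}


codings = index_codings(raw_response)
# Lookup mirrors the Coding Result table: one value per dimension.
print(codings["ytc_UgwKEehWUnNlPWy_TWd4AaABAg"]["policy"])  # regulate
```

The same index can back the random-sample view: sampling keys from `codings` yields IDs whose raw model output can then be inspected.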