Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_UgxB5DUNV…`: Billions of humans with no redeeming purpose, will destroy themselves. Kind of w…
- `rdc_mt8utda`: I use AI everyday at my dev job, and frankly, it’s nowhere nearing replacing an …
- `ytc_UgwW5VtUm…`: If they make decisions based on logic and avoiding negative consequences, maybe …
- `ytc_UgxsNxxha…`: What if an AI decides it would be good to geno**de Gaza for Isr**l? Wait, we do…
- `ytc_Ugy5EqkZQ…`: 19:50 I disagree that you must be as smart as super intelligence to predict what…
- `ytc_Ugwa2mhFK…`: All I'm hearing is "We are so bad at our jobs, base-level AI is nearly indisting…
- `ytr_UgyVpfbCO…`: That is another concern I have with AI. It goes far beyond art. Right now we can…
- `ytc_UgzeiAx7S…`: We live in a three-dimensional world, and all of these arguments in favor of ai …
Comment
> They will give UBI and jobs to people, until the machines reach the point where what people want has no power or voice to institute, that is, humans will have jobs and say until it reaches a point where people have no say or recourse because the scale difference of intelligence and physical embodiment over powers all humans. I give it 20 years unless we set limits, and ultimately design a tractable/ accountable system. I have designed auditable LLMs but they are toys and I have no funds to bring them to market, and there are scale issues to overcome. It is not a principle issue it is an engineering issue.
Source: youtube · Topic: AI Governance · Posted: 2025-09-04T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwQ1roC2TVImh3Nt-54AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwVu1gLqtdRRhNh9yN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy2JsqC_BKw36VBvWN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxcAxh6IiFRQ2yOEAt4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz_6zOAvN11xn36itl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxCXl7qYoq7gfQfUmF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwP1Tj9jtq1s_kBq6J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgysVjtJxZRek8J6IWh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx-6aZR3bulb9kvUUt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgylzdhgzAOHG46m6hJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
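A raw batch response like the one above can be parsed and indexed by comment ID for lookup. The sketch below is a minimal example, assuming the allowed values per dimension are those observed in this output (the full codebook may define more categories); the function name `index_codings` is illustrative, not part of the tool.

```python
import json

# Allowed values per dimension, as observed in this coding output.
# Assumption: the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "government",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "unclear"},
}

def index_codings(raw_response: str) -> dict:
    """Parse a raw batch response and index codings by comment ID,
    skipping rows whose dimension values fall outside the codebook."""
    indexed = {}
    for row in json.loads(raw_response):
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            indexed[row["id"]] = row
    return indexed

raw = '''[
  {"id": "ytc_UgwQ1roC2TVImh3Nt-54AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]'''
codings = index_codings(raw)
print(codings["ytc_UgwQ1roC2TVImh3Nt-54AaABAg"]["policy"])  # regulate
```

Validating against the codebook before indexing catches the occasional malformed or hallucinated label in model output instead of letting it propagate into the results table.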