Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment directly by its comment ID, or start from one of the random samples below. Each preview is truncated and listed with its comment ID.
- "If the robot is that slow and stupid, they should use it to replace management w…" (ytc_UgylOONLJ…)
- "We’re cooked either way. If AI bubble pops, recession. If AI continues improving…" (ytc_UgzUqzaWj…)
- "Im like 99% sure the guy (human) in the video is actuaply a.i. as well. Think ab…" (ytc_Ugwn_oH-4…)
- "Does anyone else feel bad for the AI? I understand your obsidian-esque sharpness…" (ytc_Ugzm9AXkB…)
- "In a world of A.I., humans become less productive and less profitable. People wi…" (ytc_UgxDHtAYD…)
- "@_-insertname-_ Who said I sell anything? Plenty of ways to use it that don't in…" (ytr_UgxCg6NPN…)
- "My family uses it for family photos, they've been loving it but they dont share …" (ytc_UgwoNh3Oa…)
- "AI companies are not generating most tax revenues. Big Tech and Big Business has…" (ytc_UgxYiypKA…)
Comment
AI isn’t a yes/no decision.
The question is where control actually exists.
Most systems:
→ optimize outputs
→ scale capability
→ improve prediction
But none of that guarantees control at execution.
The boundary is simple:
If a system can act without
independently verifiable conditions at the moment of action,
then it’s operating on inherited assumptions,
not present reality.
That’s where risk enters.
AI should be implemented where:
→ actions require proof at execution
→ conditions are re-established in real time
→ refusal is structurally possible
AI should not be implemented where:
→ outputs can act on stale or assumed validity
→ authority is carried forward without re-verification
→ execution cannot be blocked when proof fails
This isn’t about capability.
It’s about whether the system can say “no” when it matters.
No present-state proof → no execution.
Platform: youtube · Topic: AI Governance · Posted: 2026-04-24T14:1…
Coding Result
| Field | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwCKkFitvRPyKJS6nR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxzJjiqdpdtgWCVjNR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgylfFqJibsP-ZYskYd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugxg-upCa51egrEFkeB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzuXBO_aWL1FZEFEeF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzSllxQTQwqyJydZWJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgysLZl-6Gk5hQ5dq4V4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwixabSKdmq_ru3X354AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzJuuNC6prztMjSoI54AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz-uAdLLWOhBxh1n-R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
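
For anyone working with these exports directly, here is a minimal sketch of how the Coding Result table above can be recovered from the raw batch response. It assumes the raw response is a JSON array of per-comment objects like the one shown; the function name `lookup_coding` and the filename `raw_response.json` are illustrative assumptions, and the label sets are only the values observed in this sample batch, not necessarily the full codebook.

```python
import json

# Label sets observed in the sample batch above; the real codebook may allow more values.
DIMENSIONS = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"approval", "fear", "resignation", "indifference", "unclear"},
}

def lookup_coding(raw_response: str, comment_id: str) -> dict:
    """Return one comment's coding from a raw batch response (a JSON array of objects)."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            # Flag labels outside the expected sets rather than failing outright,
            # since raw model output is not guaranteed to follow the codebook.
            for dim, allowed in DIMENSIONS.items():
                if record.get(dim) not in allowed:
                    record[dim] = f"INVALID({record.get(dim)!r})"
            return record
    raise KeyError(f"comment {comment_id!r} not found in this batch")

if __name__ == "__main__":
    with open("raw_response.json") as f:  # hypothetical export filename
        coding = lookup_coding(f.read(), "ytc_UgwCKkFitvRPyKJS6nR4AaABAg")
    for dim in DIMENSIONS:
        print(f"{dim}: {coding[dim]}")
```

Run against the batch above with ID `ytc_UgwCKkFitvRPyKJS6nR4AaABAg`, this would print the same four dimension values shown in the Coding Result table: `ai_itself`, `consequentialist`, `unclear`, `unclear`.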