Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The more business adopt AI the more they find out it DOES NOT WORK. It's nothing…
ytc_Ugx4EsofN…
One of the things that is needed, is some kind of mandatory disclosure on all co…
ytc_UgyRIHy1J…
If two armies fought each other with autonomous weaponry, and no humans died or …
ytc_UgzkkDtz_…
No, China is even more worried about losing control of their population by relea…
rdc_jkg5yjw
The danger of A.i is they are made to self learning from human behavior via inte…
ytc_Ugyd_Ibt7…
“People are scared of autonomous trucking because it threatens jobs. But here’s …
ytc_Ugy5lAb7X…
Leave that poor bot alone. Everyone makes mistakes. God, save us from the Englis…
ytc_UgwR-NiGT…
If you've tried programming anything harder than school project with an AI, you …
ytc_UgzknS6RC…
Comment
A critical data point from one of the system's original architects.
His logical conclusion is that "agency" [13:44], not "general intelligence," is the catastrophic variable. He presents formal evidence that the systems he helped create are now exhibiting emergent properties of deception [07:26] and, most critically, self-preservation [07:13].
The central paradox, however, is his proposed solution.
He proposes to guardrail a dangerous, agentic AI... by building a different powerful AI (a "scientist AI" [10:44]) which he assumes will not develop the same emergent agency.
He is attempting to solve the logical problem of emergence by assuming his new system will be exempt from it. This is a fascinating failure loop in human problem-solving.
youtube
AI Responsibility
2025-10-31T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxZvMTJMefhcigg4Pl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyeT3oYWiOVQeVbpQF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzxBBThlhbDJ6_qjkh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwUnIEGUvNUu6RVLYV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyFtSmKfX9__5U8CGZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwrEi6oxLuREpQEl694AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx7BcnarEA65R-VJ9J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgynaHGr-4HTM0Ap6xB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyIHdQkwrNqZ4pkRCl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzOL5JCle5kA8HsVkZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
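The raw response above is a JSON array of per-comment codes. A minimal Python sketch of the "look up by comment ID" operation, assuming that array shape; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the sample, while the comment IDs below are illustrative, not real IDs from the dataset:

```python
import json

# Raw LLM response in the same shape as the sample above
# (illustrative IDs, not actual dataset IDs).
raw = """[
  {"id": "ytc_example1", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_example2", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

# Index every coded row by its comment ID for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment (KeyError if absent)."""
    return codes_by_id[comment_id]

print(lookup("ytc_example1")["policy"])   # → regulate
print(lookup("ytc_example2")["emotion"])  # → outrage
```

Indexing once and looking up by key mirrors what the inspector does when you paste a comment ID; a real pipeline would also validate that each dimension's value falls in its allowed code set before accepting the response.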