Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytc_UgwzP3dWg…: I'm sure A.I "artist" can't count seven reasons on one hand why entire art commu…
- ytc_UgwgEb94R…: If you use ai to write slop for you, you are neither an author or writing a book…
- ytc_UgzMuF1HA…: Could we program it to enjoy doing stuff. Like if you want a robot to cook the p…
- ytc_UgwQ7Fv3Y…: One of the best books I’ve read lately is Eidos by Felden Vareth. It’s hard scie…
- ytc_UgylkIzGl…: Not only racism Linus. It is calling people pedos and antisemitics. Some lawsuit…
- ytc_UgwRiU__r…: AI can only learn from the art humans have spent millions of hours and dollars o…
- ytc_Ugyzc_9u_…: The AI is keeping an account of "positive results to its self". If it equates de…
- ytc_UgyUt9qHG…: Yes. If AI can, "do your work", chances are your boss will chose AI rather than …
Comment
How can AI be made safe? If humans will become superfluous to the production of goods and services, the advancement of art and science then what does safety even mean? Safe from euthanasia? The only safety is to ban AI outright in all but the most menial of tasks and enact severe penalties against those working to expand its capabilities.
youtube · AI Governance · 2025-09-20T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxJcGvLIchGJ21tKhB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugy0RivQDWdobXlRX8N4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwJY29I2Q-Brv7ZkrZ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzWtTNOrUn-770HlQ94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugx7VR3JooxZQ8WwzI14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgxNQEf7Vu496Zs6m7t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwGysgvofIln1H8uTx4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxnURh1G3_ICMMqvTp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugygh_mcS1CgUz33Eoh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzFbBqeY5vVkASsL_x4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
```
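Because the model returns one JSON array per batch, looking up a single comment's coding by its ID means parsing the array and indexing it. The sketch below shows one way to do that, assuming the raw response has exactly the shape shown above; the function and variable names here are illustrative, not part of the actual tool.

```python
import json

# A trimmed-down raw response in the same batch format as above
# (two of the ten objects, shown for illustration).
raw_response = """
[
  {"id": "ytc_UgzWtTNOrUn-770HlQ94AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxJcGvLIchGJ21tKhB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

# The four coding dimensions that appear in every object.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse a batch coding response and index it by comment ID."""
    rows = json.loads(raw)
    return {row["id"]: {d: row[d] for d in DIMENSIONS} for row in rows}

codings = index_by_id(raw_response)
print(codings["ytc_UgzWtTNOrUn-770HlQ94AaABAg"]["policy"])  # → ban
```

The same lookup reproduces the Coding Result table above: the comment's ID keys into the batch, and each dimension becomes one table row.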