Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "What if AI has already taken over and has given us an illusion of freedom to tal…" (`ytc_UgwaB4Tad…`)
- "This just goes to show that AI is rubbish at knowing the truth about anything.…" (`ytc_UgzFG2kst…`)
- "What matters most in society , in any country , or culture , is community , peop…" (`ytc_Ugztv_fHA…`)
- "Get it to define Consciousness emotions and self awareness in multiple different…" (`ytc_Ugy22z2oG…`)
- "Cats out of the box no slowing it down. Behind the scenes I’m sure corporations …" (`ytc_Ugw3GbVCS…`)
- "@hlop_vmp People made ai and programmed it with art they stole to train the ai i…" (`ytr_UgwghEdjN…`)
- "from what I have seen from AI so far, i think its is evil. It needs to be stoppe…" (`ytc_UgxX1aoq0…`)
- "Hey @eastbee1034, thanks for your hilarious comment! Who do you think would win …" (`ytr_UgwDLOVw9…`)
Comment (youtube · AI Governance · 2026-03-17T07:5…)

> IDEA; regulation must be put in place that restricts any Ai system from having control of power and infrastructure. A factory or lab must in effect have a large, manual organic OFF SWITCH that can always disconnect all power systems. Additionally, autonomous robotics must have a clear and accessible OFF switch. Zero exceptions and backups in place. We simply-my must be able to hit the reset button. Consider ring-fencing defense infrastructure and weapons systems.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzqwkZUk1dWYSdVniB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxIwyT78cC7uPCutCB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugyy6yTb1H20480Z3f94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx8qvnAhn5ghCiBmYB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzfsYo5vuRWSNm1YlR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzjO1DwCWauctBWHFl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyt2I-ZZbpaestkNqt4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxSLCQ4y9xlrW9ELYN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz5ylRIlUefOiWfE0N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzCUuR4LzQJIhrqTSt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
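The raw response above is a JSON array, one record per comment ID, with the same four dimensions shown in the coding-result table. A minimal sketch of how such a response could be parsed and checked before use, assuming the allowed values are exactly those seen in the samples on this page (the real code frame may define more categories, and the helper names here are illustrative, not part of any actual pipeline):

```python
import json

# Allowed values per dimension, inferred from the samples on this page
# (an assumption; the real coding schema may include more categories).
SCHEMA = {
    "responsibility": {"none", "company", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "mixed"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"resignation", "outrage", "fear", "indifference", "approval", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against SCHEMA."""
    records = json.loads(raw)
    for rec in records:
        # IDs on this page use ytc_ (comments) and ytr_ (replies) prefixes.
        if not rec["id"].startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected id prefix: {rec['id']}")
        for dim, allowed in SCHEMA.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec[dim]!r}")
    return records

# One record from the raw response above, used as a smoke test.
raw = ('[{"id":"ytc_Ugyt2I-ZZbpaestkNqt4AaABAg","responsibility":"distributed",'
       '"reasoning":"contractualist","policy":"regulate","emotion":"fear"}]')
coded = validate_codings(raw)
print(coded[0]["policy"])  # -> regulate
```

Validating before ingest matters here because LLM output is not guaranteed to stay inside the label set; a single off-schema value caught at parse time is cheaper than a corrupted dimension discovered during analysis.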