Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- He wanted to use Ai for the strikes on Iran. Company said No. Thats why Hes so a… (`ytc_UgwmD-UGG…`)
- @caitlinsnowfrost8244 the AI is still trained to use the art in the original dat… (`ytr_Ugy8U4l5f…`)
- Videos like this are so important right now. I love drawing and graphic design a… (`ytc_UgxeU3i2H…`)
- No she ain’t stupid that’s why she’s getting fired AI don’t talk back and can’t … (`ytr_UgywDxQ8W…`)
- Humans are more susceptible and AI can be fixed more easily. AI isn't a joke...… (`rdc_mk7j210`)
- "AI gonna replace software engineers in a few years, i'm tellin' you bro!" Mean… (`ytc_Ugxnfpd_I…`)
- Just fyi, LLMs like ChatGPT do not aim to be logically sound. A better way to th… (`ytc_UgxS4Gjan…`)
- Remember, Companies are replacing people for AI because they believe AI can rep… (`ytc_Ugx3Kv0AD…`)
Comment
I believe it’s far more likely that open source AI will enable fanatics and cults to carry out devastating attacks on both people and our infrastructure. A small terrorist group could design a biological weapon such as has never been seen before. They could disrupt global supply chains and banking systems. The earth cannot support more than about 10% of our current population of eight billion without our infrastructure.
Source: youtube · Topic: AI Governance · 2025-08-04T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwIoOom2BIMEZaFyul4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy0sVOPz5Dnty7vX4R4AaABAg", "responsibility": "government", "reasoning": "mixed", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgySeEHytqv5Wnbuu8p4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "disapproval"},
  {"id": "ytc_UgxousR2AulTVP2RCSl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxCmEFgLqh5ioKVVN94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy-F95BK1Gz-x5WNi54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyevSIntYjChKIbEkN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzzCMH-7gdUDZFQlr94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "excitement"},
  {"id": "ytc_UgwS9OqywTFCWBgmgBh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyPgzrKDjokIv7El_l4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
```
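The "look up by comment ID" step can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: it assumes the raw model output is a JSON array of records keyed by `id`, each carrying the four dimensions from the Coding Result table (responsibility, reasoning, policy, emotion). The function name `index_codes` and the two sample records (taken from the response above) are illustrative.

```python
import json

# Assumed shape of a raw batch response: a JSON array of per-comment
# codes, each with an "id" plus the four coding dimensions.
raw_response = """
[
  {"id": "ytc_UgwIoOom2BIMEZaFyul4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyPgzrKDjokIv7El_l4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

def index_codes(raw: str) -> dict:
    """Parse a raw batch response and index the records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_codes(raw_response)
print(codes["ytc_UgwIoOom2BIMEZaFyul4AaABAg"]["emotion"])  # fear
```

Indexing by ID is what lets the viewer jump from any comment in the sample list straight to its coded dimensions without rescanning the whole response.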