Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
What's left out here is BAD PEOPLE misusing super intelligent AGI. Even if we can control/guide super intelligent AGI to not turn against humans, you can be 100% positive that bad people will use super intelligent AGI to kill. Only takes one release of a bioengineered deadly virus to do this. We have run simulating training scenarios on AI abuse on our Collapse Survivor App. They do not end happily.
youtube · AI Governance · 2025-11-30T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwt6DaGWcFenvlbTBp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxyf_KfEQ2SLYok9-d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxNer6d7CZRXqmlaG54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzR2vrJcaG9Ig5JR1B4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyZZ9jHk4bbQfEP0Dx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyuPRpKBn8Os3Fin7N4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzsg8sUUHkAulH3hU14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwLJvQHEMlGkfLb-Ph4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy4HfiN8Djm14kV0pR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy5zIlJQ3h8lFTfr1F4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}
]
```
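Because the raw response is plain JSON keyed by comment ID, a small validation pass can catch off-scheme values before the codings are used in analysis. A minimal sketch, assuming allowed value sets inferred from the samples visible here (the real codebook may include more values) and a hypothetical `validate_codings` helper:

```python
import json

# Allowed values per coding dimension -- ASSUMED, inferred only from the
# records shown in this dump; the actual codebook may be larger.
ALLOWED = {
    "responsibility": {"user", "developer", "company", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "resignation", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records that match the scheme."""
    valid = []
    for rec in json.loads(raw):
        # Comment IDs in this dataset use ytc_ (YouTube) / rdc_ (Reddit) prefixes.
        if not rec.get("id", "").startswith(("ytc_", "rdc_")):
            continue
        # Every dimension must be present and use an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_Ugwt6DaGWcFenvlbTBp4AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(len(validate_codings(raw)))  # → 1
```

Records that fail the check can then be re-queued for recoding rather than silently skewing the dimension counts.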