Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- ytc_Ugz7zYFbo…: "If you're young and able to qualify, go into the military. Stay in as long as y…"
- ytc_Ugz-qr3BW…: "If he thinks that Ai is far more dangerous than nukes then why is he making more…"
- rdc_kjnqxhb: "There is a subreddit dedicated to deep fakes with celebs. How reddit hasn't nuke…"
- ytc_Ugz-xaGPm…: "Max Tegmark. He's Really that guy. Now this is CONTENT. In all seriousness. I ho…"
- ytc_UgzpSoTfL…: "You aren’t born with the gift of drawing, none of us are. It takes hard work and…"
- ytr_UgzSTSqwH…: "It’s just because all real artists are just… so sick and tired of AI pretending …"
- rdc_jehef1c: "ChatGPT isn’t AI despite what their marketing department would have you believe.…"
- ytc_UgxCd_-wU…: "Here's what's really screwed up about our society. Universities will continue to…"
Comment
AI is just a code word for "statistics".... that's all these models are, a new way to present statistics.... and yes, applying statistics to things helps us achieve our aims. But that's about it.
Accordingly AI will on be as good or bad for humanity, as humanity is.
Platform: youtube · Category: AI Governance · Posted: 2024-01-03T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugzbn6a_dwlmjolXoL94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwZbTEQqncUU7eMDiZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy3nFHU5vnBwMHx7Oh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwDmteD6MITnX0p-Vp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyq94my2nvvbl6S89V4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgytGcExoFcvXVFwLVl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx4txTp5ZnpGYfAost4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzZ7y2YwaLVeycwMBV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy4gMFXd6ZPfLWnLjh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzqV8LylRk_ZCLlB-R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
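The batch above is a JSON array of per-comment codes along the four dimensions shown in the coding-result table. A minimal sketch of how such a response could be parsed and checked before storage is below; the allowed value sets are inferred from the responses shown here and are an assumption, not the full codebook.

```python
import json

# Allowed values observed in the batch above. Assumption: the real
# codebook may define additional values; these sets are illustrative.
ALLOWED = {
    "responsibility": {"user", "company", "developer", "government",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval",
                "indifference", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject out-of-codebook values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"virtue","policy":"none","emotion":"fear"}]')
records = validate_batch(raw)
print(len(records))  # 1
```

A check like this catches the common failure mode where the model invents a label outside the codebook, so malformed batches fail loudly instead of being silently stored.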