# Raw LLM Responses

Inspect the exact model output for any coded comment, or look a comment up by its ID.
## Random samples

- `rdc_jkg0c65`: Sam Altman's actions can viewed as possibly coming from two very different world…
- `ytc_UgxBEjhtD…`: Alex is a smart guy, but this meaningless waste of time shows that he obviously …
- `ytc_UgxJMM2r7…`: If Sam really believed that why does he not promote decentralised and secure AI …
- `ytr_UgxG76Ti_…`: AI has very flawed linguistics. It's built on a dataset of whatever it can find …
- `ytc_UgyoD89Yi…`: What I wonder is, if we are asked to check a box to say some of our content is A…
- `ytc_Ugy7pHw0u…`: You can make people think you’re using AI to type all your messages in servers s…
- `ytc_UgyyYfDnF…`: Autonomous IA is frightening. Even if it never leads to a large scale war, I fin…
- `ytc_Ugw6ZEUto…`: The Godfather of AI CAN DESTROY IT, IT WILL BECOME VERY DANGEROUS FOR OUR NATION…
## Comment

> I’m not so much worried about AI as I am about the fascism in the US. AI, I feel, is inevitable and we should learn to live with it and find ways to offset its displacement of jobs in society. But we can’t even seem to get half the country to agree that people should have civil rights, kids should have food, and foreigners should be embraced. So it seems to be the lesser of all the evils imo.

youtube · AI Jobs · 2025-10-08T15:0…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
### Raw LLM Response

```json
[
{"id":"ytc_Ugz_gIi8meBph4ZJS6l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzqgoHDqX62IHGitjF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx2Do4gBqf9HenRbUJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwIylNg8taBEeu-6TJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzURYXIHkQY4kcMRJ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgytMo3k4jqPMCXNBhl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxwaoD1ZWSXCM9QKkB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz9rXqWbVwqeaymLqh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzAY42BiFcrlthNiAR4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugx63W6C8rOHWLynDj14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
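A batch response like the one above can be parsed and indexed by comment ID to recover the codes for any single comment. A minimal sketch in Python, assuming the model returns a well-formed JSON array with exactly the field names shown (the two abbreviated records below are copied from the response above):

```python
import json

# Raw model output, abbreviated to two of the records shown above.
raw_response = """
[
  {"id": "ytc_UgzAY42BiFcrlthNiAR4AaABAg", "responsibility": "government",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_Ugx63W6C8rOHWLynDj14AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
"""

# Index the batch by comment ID so one comment's codes can be looked up directly.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

row = codes_by_id["ytc_UgzAY42BiFcrlthNiAR4AaABAg"]
print(row["policy"], row["emotion"])  # prints: regulate resignation
```

In practice the model output may carry stray text around the array, so a production loader would want to validate the parse and check that every expected comment ID is present before accepting the batch.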