Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples (previews are truncated; use the comment ID for the full text)

- `rdc_kz04b3o` — "Someone who knows the documentation and knows how to use AI as a tool will be fa…"
- `ytc_UgzIAIjZz…` — "These people are being targeted for potential crimes not yet committed. Nor are …"
- `ytc_Ugy_Hqsby…` — "Alright man, i’m not a frequent commenter on videos so i wasn’t goint to leave a…"
- `ytc_UgynxEZcC…` — "What’s your source for how much energy an ai video takes??. Common sense tells …"
- `ytc_UgyRGW9A-…` — "AI will. Never. Be. Conscious. It’s a program and hardware that mimics being hum…"
- `ytc_Ugw9Sx9tJ…` — "See that is how you know the Bible is the right way and Only Way, In revelation …"
- `ytc_UgwKSPAfP…` — "I gotta be honest im not a fan of all this ai business. I dont like people becom…"
- `ytc_UgzscNZob…` — "Well, they probably use AI to tell them what to think, so don't be too bothered …"
Comment
Don't forget that the AI model can only create meaningful output if there already exists input generated by humans. Once AI displaces human beings, it will eventually displace itself as a technology. The only reason Anthropic can develop AI agents for software development is that it is not illegal for Anthropic to take other humans' intellectual property and resell it back to humans. Once the hype around AI settles, everyone will understand the truth about this technology. AI will become a great tool for managing vast amounts of data; however, replacing human beings will prove problematic, to say the least. We still need humans to generate the input that AI needs to generate its output. Remove humans from the equation, and AI will eventually fail as a technology. See it as a symbiotic ecosystem: humans alongside AI technologies.
Source: youtube · Topic: AI Jobs · Posted: 2026-04-03T01:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwbSvAnZVPob9KFTfJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzb6zv2wCRE03_LsMx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxY8wquW0rR1qWPNgh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyFv4rPK4lHY0YbwXB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyOGng9H0MmnDpbMp54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxz8BgHxqQCjU8vUBN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwP9f7fey83CzZ3ePx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzaRzrrOkaD_fzXKxl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxQ6NrglUZK9WoKL-F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwajlmq7ZBDVUShj7d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]