Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- No one is abusing the art industry, AI is replacing the art industry, and AI can… (ytc_UgwJusAZ-…)
- I'm the Biggest AI and automation fan in the world. It is the key to a world whe… (ytc_Ugz-Ydvtp…)
- As a artist and someone who loves programming, AI Art Is the most Beautifull Tra… (ytc_UgymDh7t9…)
- You’re teaching students how to get a job. Your students understand no jobs will… (ytr_UgznjS1ai…)
- Ai “art” is worthless and meaningless, it’s meaningless because there was no tho… (ytc_UgycxWuak…)
- I feel like you are anti-AI because it directly affects you as a content creator… (ytc_UgwI33gl2…)
- I don't think we have achieved artificial intelligence but then I think of how w… (ytc_UgxG07dX0…)
- @kitflexer Unless they used Photoshop, or any form of technology that simplified… (ytr_UgzJeocvu…)
Comment
Sam Altman is just power-hungry and money-crazed - and even though I like Trump, I am absolutely misaligned with his proposals to de-regulate AI.
This isn't just a new technology, it's a new form of 'life' (so to speak).
The idea that AI could end humanity is absolutely real - if not, it will make 99% of us jobless and without worth.
OpenAI (and others) only seek to make profit and NOT to benefit humanity - OpenAI's definition of AGI is a product that makes them $100 billion. That tells you ALL you need to know.
90% of AI experts are worried about this, and I find it ironic that the guy who doesn't believe that AI is a threat is a psychologist, NOT an AI expert.
If there is one thing to remember, it's this: if 'Contingency 1' (or a China-US AI) is created, it will wipe us out, not because it hates us, but because we will be in the way of progress.
Remember that.
youtube · AI Governance · 2025-08-12T09:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
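The coded dimensions above can be sanity-checked against the label sets that actually occur in this sample. A minimal validation sketch follows; note that the allowed-value sets below are only the labels observed in the responses on this page, not the full codebook, and `validate` is a hypothetical helper, not part of the coding tool.

```python
# Allowed values per dimension, taken ONLY from labels observed in this
# sample; the real codebook may define more.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "resignation", "outrage", "fear", "approval"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose value falls outside the observed set."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The record from the Coding Result table above.
record = {"responsibility": "company", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "outrage"}
print(validate(record))  # → []
```

An empty list means every dimension carries a label seen elsewhere in the sample; any unexpected label (e.g. a hallucinated category from the LLM) would be flagged by name.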
Raw LLM Response
```json
[
  {"id":"ytc_UgwbrhswIIZ9lYzYU0t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz4dUoxsQQrT1vSsv54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwGULoxnOV0OXjVLZd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxMIWTSUOCr5kJlmxl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyhLiQovEnzF6Uqzm14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwgzaPeSPAD0TMgPlt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxJqgH_Gwl_M2hFH5d4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwgxzBT2JuOw6JVz0V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyac_FUzKwZiVvMS-h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgznZSBbffqJp_a8-QV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
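The lookup-by-comment-ID step can be sketched as follows. This is a minimal example, assuming the raw model output is a JSON array keyed by `id` as shown above; the two records are copied verbatim from that response.

```python
import json

# Raw LLM output in the shape shown above: a JSON array of coded records,
# one per comment, each keyed by the comment "id".
raw_response = """
[
  {"id": "ytc_UgwGULoxnOV0OXjVLZd4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyhLiQovEnzF6Uqzm14AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "none", "emotion": "fear"}
]
"""

# Index the parsed records by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

record = codings["ytc_UgwGULoxnOV0OXjVLZd4AaABAg"]
print(record["policy"])  # → regulate
```

Indexing once and looking up by ID avoids rescanning the array for every inspected comment, which matters when a batch response covers many comments.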