Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "This has been a LONG time coming and we'll just have to see how this goes. I do …" (ytc_Ugx94Y-R7…)
- "AI isn't here to coddle prejudice. It's here to interrupt it. Not with hate, but…" (ytc_UgwdSygpD…)
- "> No one is claiming LLMs in general or applications built on top of them lik…" (rdc_mzxm3nm)
- "If humanity were to return to the medieval era, the consequences would be extrem…" (ytc_Ugz8BBlG1…)
- "@sircharlesmormont9300 Let us be honest 'WORK' is modern slavery, imagine a worl…" (ytr_UgyCpaxvO…)
- "Let's get straight to the point here. Users create accounts on X just like they …" (ytc_Ugzzs6ipt…)
- "How many people in this chat checked stop AI and pause AI organizations? Why is…" (ytc_UgwOIjLKn…)
- "Crypto farms. Rip off people. We know who does. Much more corruption in this cou…" (ytc_UgzRFbqYm…)
Comment
Sam Altman used the absence of regulations related to AI and carried out the dangerous AI model in public, made it available for self-development, and started making money. He and any other leaders in the AI industry have no idea (or solutions) how the AIs will benefit people, yet just saying it will improve the productivity. Their words are exactly meaning of devaluing human interaction, emotional intelligence, spiritual intimacy, intellectual activities, intellectual properties, and so much more. You are no longer safe from any internet connection. Sam Altman has to go to jail forever and all AI services should heavily fined and controlled by the government.
Source: youtube
Video: AI Moral Status
Posted: 2024-04-09T01:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz8jE6p7mgsh4JiPB94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyorFrDsHsr5WpSUJN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxFi_qhnWPnUMJan8N4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyA55llL07lcvpkHF14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyroVscjyXdKJPZcOx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw76BB_K7XRQwLdp194AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz1zjx5aTfRCYdlEb54AaABAg","responsibility":"user","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw7w-Jtau-VEOiz1cd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyhx7J_kQdFdQijMip4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzYDBUjfClJmSQnPM14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
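A response like the one above only becomes usable coding data after it is parsed and checked against the codebook. Below is a minimal sketch of that step, assuming the dimension values seen in this dashboard are the allowed codes (the real codebook may define more); the function name `validate_batch` is hypothetical.

```python
import json

# Allowed values per coding dimension — inferred from the codes visible
# in this page; treat this as a best-guess subset of the real codebook.
CODEBOOK = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"outrage", "fear", "indifference", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    keep only records whose values all appear in the codebook."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # every coded row must carry a comment ID
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(rec)
    return valid

# Example: one well-formed record passes validation.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
print(validate_batch(raw))
```

Records that fail validation are dropped rather than repaired, so a malformed batch can simply be re-sent to the model for re-coding.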