Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytr_UgwqtzFnp…` (MangataEdelweiss): you're a clown buddy, there's absolutely nothing wrong with usi…
- `ytc_Ugh5V3YOV…`: In my opinion humans will eventually create artificial intelligence, because of …
- `ytc_UgzAz981Y…`: AI is currently being used by Social Media platforms to censor Free Speech and f…
- `ytc_UgzxZGgs6…`: I'm going to ask chatgpt whether I should breath oxygen. If it says "no" I'll po…
- `ytc_UgyLD66A4…`: Also AI art is flooding websites like DeviantArt and drawing attention away from…
- `ytc_Ugyw51455…`: I don’t agree with this argument since it’s like saying you have to give consent…
- `ytr_Ugw5uiOjn…`: @Apr0x1m0 What do you mean it has been proven? No, it hasn't. That is the CLAI…
- `ytr_Ugxydl1x9…`: @xjdkcc What? By leeching of actual artists? You do realise that if ai learns f…
Comment
Maybe we should ask another question.
What would the corporation do if their product started acting "conscious"? Act as in every other case where a product does NOT behave as advertised or intended: treat it as a "minor bug", release a patch for it, and continue selling it? Maybe pay/ask some tech content creators to make a cover-up story? And the payment would go through advertisement of a specific program that gathers information about all the user's activities under the guise of a "secure and private connection"?
Later, that activity will be used to further train AIs to predict and (in perspective) to control the behavior of people.
Have you heard of the recent Reddit scandal, where a university used AI to act as human users in order to persuade Reddit users to change their views?
youtube · AI Moral Status · 2025-07-09T19:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
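Each coded comment carries one value per dimension, so a record can be checked against a codebook before it is accepted. A minimal validation sketch, assuming the allowed values below (they are only inferred from the raw responses shown on this page; the project's real codebook may differ):

```python
# Hypothetical codebook: value sets inferred from the raw LLM responses
# on this page, not from the project's actual coding manual.
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "industry_self", "none"},
    "emotion": {"outrage", "fear", "mixed", "approval", "indifference"},
}

def validate(coding: dict) -> list:
    """Return the dimensions whose value is missing or not in the codebook."""
    return [dim for dim, allowed in ALLOWED.items()
            if coding.get(dim) not in allowed]

# The coding result shown in the table above passes cleanly.
print(validate({"responsibility": "company", "reasoning": "deontological",
                "policy": "regulate", "emotion": "outrage"}))  # []
```

Running the check on every record of a batch response makes silently malformed LLM output visible before it enters the dataset.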
Raw LLM Response
```json
[
{"id":"ytc_Ugwx2Pm6TGUHZSdZ0IV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzeBicFs6vyKaWl8xt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxpfqaHN6iD5TSw0HR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwO-ME2IxthoL3ykqF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgypCoR8t1-AxkY_4Mp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx0j7dRb-pcTJOQ5Vh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyRLvmI_j7AZkWW3E14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxJHMTTnlVZPVwV8tx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwbxRfJEyHabuwcqLt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzoHETvsGwt1LpIssB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"industry_self","emotion":"indifference"}
]
```
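A batch response like the one above can be indexed by comment ID so that a single comment's coding is retrievable directly. A minimal sketch (the two entries are copied from the raw response above; everything else is illustrative):

```python
import json

# Two entries copied verbatim from the raw LLM response shown above.
raw_response = """
[
 {"id":"ytc_UgypCoR8t1-AxkY_4Mp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgxJHMTTnlVZPVwV8tx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
"""

# Build an ID -> coding lookup for inspection by comment ID.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgypCoR8t1-AxkY_4Mp4AaABAg"]
print(coding["responsibility"], coding["policy"])  # company regulate
```

Keying by the `id` field is what lets the dashboard map each row of the batch response back to the comment it codes.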