Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "I was the innocent and new user one.... I just wanted to sleep....and i got pinn…" (ytc_Ugxmdt79k…)
- "automating society leads 'torture and violence' since technology is supposedly f…" (ytc_UgzXiGR3d…)
- "AI and Roberts will kill lots and lots and lots of us. Humble doomsday scenario.…" (ytc_Ugw1MF6gX…)
- "Im gonna say youre a bad person for using ai... grow up. use human references …" (ytr_UgzR4GX7G…)
- "Ai has been around a hell of a lot longer than humans think.. wayyyyy longer tha…" (ytc_UgzCPuoSq…)
- "absolute lies. IT offshore outsourcing is responsible for jobs no longer existi…" (ytc_Ugw4YUNWj…)
- "@neoczy3249AI has threatened docs since a long time now,idk what to even do,thes…" (ytr_UgyUgsWEw…)
- "This is propaganda. Ai is not meeting its shareholder needs and is actually fall…" (ytc_UgzZahnlj…)
Comment

And rolfcopter, with aqll this no AI startup can create intelligent Artificial Intelligence. And nope LLMs are not intelligent, they are statistical sorting neural networks that are compulsive liars, and they do not even know they lie. So probably when we have next leap like we had few years back with transformers, we may be able to make somewhat intelligent AI. If any AI startup had real AGI (even stupid one) they would already release it. So again marketing lies, and scaring people to just make more buzz around AI in hopes stocks will raise, stupid investors will burn more money.

youtube · Cross-Cultural · 2025-11-03T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
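Each coding assigns one value per dimension. A minimal validation sketch in Python, using allowed-value sets inferred only from the codings visible on this page (the full code book may define additional values; the sets below are an assumption):

```python
# Allowed values per coding dimension, inferred from codings shown on this
# page (assumption: the real code book may contain more values).
CODE_BOOK = {
    "responsibility": {"company", "developer", "government", "user",
                       "distributed", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none"},
    "emotion": {"outrage", "fear", "mixed", "indifference",
                "resignation", "approval"},
}

def validate_coding(coding: dict) -> list[str]:
    """Return a list of problems; an empty list means the coding is well-formed."""
    problems = []
    for dim, allowed in CODE_BOOK.items():
        value = coding.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} value: {value!r}")
    return problems
```

For example, the coding result shown above passes: `validate_coding({"responsibility": "company", "reasoning": "deontological", "policy": "industry_self", "emotion": "mixed"})` returns an empty list.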
Raw LLM Response
```json
[
{"id":"ytc_Ugxe1WFLXVv92FxZU7R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzbg3ZDAE5D5YSJCp54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxcRHJ9vPPNfH7tKbd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_Ugyhe-oVzYZDXbyxEGp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwklfQKRizTxiMKTxB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwYo4SiXCIhiH02LtR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwKAbQExZXrGdgiu794AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw5O7QuT3OWONAn5G94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwHaTpYozIJByangc94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzXQ-XrfAdiWp973PZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
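Since the model returns one JSON array covering a batch of comments, looking up a coding by comment ID amounts to parsing the array and indexing it. A minimal sketch (assumption: in the real pipeline the raw text may first need fence-stripping or repair before `json.loads` succeeds), using two codings from the response above:

```python
import json

# Raw model output: a JSON array of per-comment codings (two rows from the
# response shown above, for illustration).
raw_response = """
[
  {"id": "ytc_UgwYo4SiXCIhiH02LtR4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgzXQ-XrfAdiWp973PZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

# Index codings by comment ID so any coded comment can be looked up directly.
codings_by_id = {row["id"]: row for row in json.loads(raw_response)}

print(codings_by_id["ytc_UgzXQ-XrfAdiWp973PZ4AaABAg"]["policy"])  # prints "ban"
```

This is the lookup the "Look up by comment ID" control above presumably performs against the stored responses.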