Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "What is already scary is, For instance, the Canadian government let an "Carney A…" (ytc_UgzOugmF6…)
- "I feel for this poor kid. I posted a WIP ref of an OC of mine in a group chat I'…" (ytc_UgzAiYCHj…)
- "I stopped when I learned about the effects of it on environment. I really don’t …" (ytc_UgwYesA1U…)
- "Anything an AI comes up with is just a rehash of whatever is within its training…" (ytc_Ugw_FTXSy…)
- "@JesusDihChrist ah! ai is not the only thing that harms the enviorment! social m…" (ytr_UgzraStlF…)
- "Sigh. I get the impression that that person may have been at fault crossing in t…" (ytc_UgxVZQDi2…)
- "If a human takes twenty years to learn what an AI can learn in minutes, it's sti…" (ytc_UgxrcOIu_…)
- "He is misinformation AI ROBOT I GIVE LESS THAN 4years and they will be far more …" (ytc_Ugwg4a9Bz…)
Comment
Ffs, this is just like when people freaked out about robotics. 'Oh no, robots will take all jobs. aaaaaah!'.
Stop freaking out. Humans will adapt. Ai is NOT like humans. Ai is designed to help. They cannot overcome programming. All the problems with ai are errors between conflicting directives that humans cause. Be scared of humans, NOT AI.
Too many damned movies about skynet are brainwashing the masses.
youtube · AI Governance · 2026-03-17T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgxiyqmDpw0692lWdMV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxFmram0tdrPZZkhXR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzHgx5LLEPGx3ulz-x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx3LYWFlFWH96PvjyZ4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"disapproval"},
{"id":"ytc_UgwyoAiQ9L3VIl2NzxJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzfXqIy7IFntO98QPx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyPVAagMVHIt04LYvt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugxdb7d6Klf23WAABVZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx4TW_rtnN_Hd6z5yR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwpSsiwJJ5MLGWYULh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"disapproval"}]
```
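The "look up by comment ID" step above can be sketched as: parse the model's JSON array and index the records by their `id` field. This is a minimal illustration using two records from the response shown; the function name `index_by_comment_id` is illustrative, not part of any tool API.

```python
import json

# A trimmed copy of the raw LLM response above: a JSON array of
# per-comment codes, one object per coded comment.
raw_response = """[
  {"id": "ytc_UgxiyqmDpw0692lWdMV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwyoAiQ9L3VIl2NzxJ4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]"""

def index_by_comment_id(response_text):
    """Parse the model output and build a comment-ID -> codes mapping."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(raw_response)
# Fetch the coded dimensions for one comment by its ID.
print(codes["ytc_UgwyoAiQ9L3VIl2NzxJ4AaABAg"]["emotion"])  # outrage
```

In practice the response may contain malformed JSON, so a production lookup would wrap `json.loads` in error handling before indexing.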