Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "What are we going to do, what are we going to live on, and how are we going to earn a living? …" (ytc_Ugw6Xp96s…)
- "Waymo is still a better service, we'll see where they are in 6 months to a year…" (ytc_UgwNBN4N2…)
- "The takes I break out on... 1. AI slop: many people want to profit from it, but…" (ytc_Ugy-SCBZe…)
- "This is one of the reasons I am against AI development. Luckily it has been slow…" (ytc_UgxPMwqLp…)
- "I believe something is wrong with our current approach to AI. The human brain ca…" (ytc_UgyqGjPNw…)
- "And I’m just sitting him, using my stimulus check to pay for next months rent, m…" (rdc_fn5pqkt)
- "@Iman_savageai: Do you think we can trust AI more than humans? Sounds a little s…" (ytc_Ugw3UI5zr…)
- "I like how people are scared of losing their jobs to AI, and, at the same time, …" (ytc_UgwMNWHoi…)
Comment
The feedback loop for misinformation you describe is worrying, but perhaps this can be addressed by the programming and training algorithms. It should be possible to integrate proper fact checking in LLMs. Keep in mind that these are very early days for AI. As for the slop, this could be said to be due to bad prompting, i.e. it is ultimately a human problem, not a tech problem.
Source: youtube · 2026-01-24T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzeridWgn0USXt9EnJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw8vzYQnZLb-hv21UJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzG_CniqQDwt2ff_VB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgywIrNaL_jrzXUZnjh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwKjug4Y0nqZluV4e94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyBg4klz0iKo75ewml4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzB85FS_mAe2Nl1DXJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyV7fllnqm9AholJaJ4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxDmzYGVe9MpYHjwMd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyo9WJ6cwGtKaeBJE54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
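The lookup-by-comment-ID step above can be sketched in a few lines: parse the raw LLM response as JSON and pull out the record for one comment. This is a minimal sketch, assuming the raw response is valid JSON like the batch shown above; `lookup` is a hypothetical helper name, and only a subset of the records is reproduced here.

```python
import json

# A subset of the raw LLM response shown above (one coded record per comment).
raw_response = """
[
{"id":"ytc_UgzB85FS_mAe2Nl1DXJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugyo9WJ6cwGtKaeBJE54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
"""

def lookup(raw, comment_id):
    """Parse the raw response and return the coding dict for one comment ID (or None)."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

coding = lookup(raw_response, "ytc_UgzB85FS_mAe2Nl1DXJ4AaABAg")
print(coding["responsibility"], coding["policy"])  # → user industry_self
```

The returned dict carries the same four dimensions displayed in the Coding Result table (responsibility, reasoning, policy, emotion).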