Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "AI safety" means training AI to not blurt out that the emperor has no clothes… (ytc_UgzseXzrx…)
- Bold talk when OpenAI is in a project called stargate with a guy who literally w… (rdc_m9hfwvu)
- There should be a law totally outlawing driverless semi's. Would one of these CE… (ytc_UgxMf--qF…)
- I'm pro AI. Everyone should directly benefit from its advancement, and especial… (ytc_UgzvBJHrK…)
- But US private contractors get to rebuild the country, paid for by the Belize pe… (rdc_dsbb63t)
- People getting mad at Deepseek for just taking ChatGPT as a basis without permis… (rdc_m9hwsiz)
- All of this AI inclusion into all ranges of humanity will be the end of humanity… (ytc_UgzXsMuq-…)
- Hey they may be putting AI in all the communication networks, but at least *my* … (rdc_o78r8n4)
Comment

> I use AI several times a day for a multitude of tasks. It requires caution, but asking additional questions usually clears the bugs. It is great for identifying propaganda and deliberately false information. I would prefer it was a little bolder at times. It is not safe to rely on it entirely, but it can identify where to look for further information. It shows Google to be a clunky dysfunctional mess.

youtube · AI Governance · 2025-12-30T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwS7ZaYErYtb2o844B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzed0s3l6lWq0KyhI94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwuYTzyHYJzGyljuPd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxmTOCRb1tlaDWFxNR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzjEfw4jMnyn28cevN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwnkJwzlRZz_Wiz1SZ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxRbbrybpv42njlQqd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwlJtsxGhikR0n9xDN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
{"id":"ytc_UgwdFbKxnavTBJazV6R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTpFe0HyBispJeo654AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
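A raw response like the one above is only usable downstream if it parses as JSON and every row carries a recognized label for each dimension. The following is a minimal validation sketch; the allowed value sets are inferred from the responses and the coding table shown on this page, not from an official schema, and `validate_codings` is a hypothetical helper name.

```python
import json

# Allowed labels per dimension, inferred from the coded output above.
# A real pipeline would load these from the actual codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user",
                       "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row.get('id')}: unexpected {dim} = {row.get(dim)!r}"
                )
    return rows
```

Rejecting the whole batch on a single bad label is a deliberate choice here: it surfaces schema drift in the model's output immediately instead of silently storing unusable codes.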