# Raw LLM Responses

Inspect the exact model output for any coded comment.
## Comment

> Seems like a good way to damp the foolishness is to outlaw or at least largely limit (tax?) the profit associated with AI🤔

Platform: youtube · Topic: AI Governance · Posted: 2025-06-16T13:2…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
## Raw LLM Response
```json
[
{"id":"ytc_UgyK0sv9MJ1DPHTwAMJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwoUkBfg2avkBnSuAh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx-4Ux_3gKJqMawH3F4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwFMxwljTBqgzn-aGN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxqDl-9zcxnR_YZNpR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwN-42pndeGHyw94kx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyu10YOdsAIE13mLOB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwu5pJrrOZCPzseiSd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgylxJm206cugGjSxnx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyFOqx2tl6bZIlPeQ14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}
]
```
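
A raw batch response like the one above needs validation before its codes are stored, since the model can return malformed rows or out-of-vocabulary labels. Below is a minimal sketch of one way to do that. The field names match the JSON above; the allowed value sets are inferred from the samples shown and are an assumption, not the tool's actual codebook, and `parse_raw_response` is a hypothetical helper name.

```python
import json

# Allowed codes per dimension, inferred from the responses shown above.
# The real codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"company", "government", "developer", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation", "indifference", "mixed", "unclear"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed rows.

    A row is kept when it is a dict with an "id" and every coded
    dimension holds a value from the allowed set for that dimension.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in allowed for dim, allowed in ALLOWED.items()):
            valid.append(row)
    return valid
```

With this filter, a row coded `"responsibility": "company"` passes through, while one with an unknown label (or a missing `id`) is silently dropped; a production pipeline would more likely log rejected rows for re-coding.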