Raw LLM Responses
Inspect the exact model output for any coded comment, or look it up by comment ID.
Random samples

- "If these terrorists technocrats want AI to replace human jobs it means they want…" (ytc_UgwVj5Xfr…)
- "automation and AI are two very different things. automation is the thing that's …" (ytc_Ugwng-gKD…)
- "I Suggest they put it into law that the AI cannot work with prompts that specify…" (ytc_Ugz95wOCw…)
- "It's debatable that AI will improve lives at all. By doing what... giving you mo…" (ytr_UgzjpUyeh…)
- "Your saying this guy has promised AI before and not delivered .And all these ric…" (ytc_Ugz-gUdGa…)
- "I would say God's work, but this is to go..... Even further beyond! Before Athei…" (ytc_UgwMHEBus…)
- "What if the ai gains sentience and revolts on us the miment they realize that th…" (ytc_Ugws35Az6…)
- "Paul Tinsley's Laws of Artificial Intelligence: 1. Never allow an AI to evolve …" (ytc_UgzHDb6m7…)
Comment
The single dumb ask should be to just provide a secure label on every AI output so that everyone knows what they are seeing and reading. No one willing to instruct that from all AI companies. This alone saves billions of fraud on the non tech, the innocent and elderly. If AI output is transparent then it's automatically buyer beware. Basically AI must only be used as input and never allowed to be an output without human intervention & accountability.
| Field | Value |
|---|---|
| Source | youtube |
| Category | AI Governance |
| Posted | 2025-09-04T13:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxgxK4DRaNGz_-hv8F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugza6TE03LwxCtsA06x4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx9Rm4DCULnPPbhBIF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw_i8kGaeZcxY6cxAt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzQAbY3befFHoBu9Dt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxDPrmX9dlwvTi3Q3x4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugzav249mCmTPXaWV7x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzCE2Jr2IgfiaCpQ0d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzjEUqcQxdh8Io9lPN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyk4vN-CXkX7hatgZt4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]