Raw LLM Responses

Inspect the exact model output for any coded comment, or look up a response by its comment ID.

Random samples
- "100%!!! AI means death of humanity or any sort of freedom. Complete loss of oppo…" (ytr_UgyO5xPcT…)
- "if we actually make real life AI that we cannot control, then they will literall…" (ytc_Ugj8xpx1P…)
- "Sir, coporates are dictating the governments . Before, corporates & politicians …" (ytc_UgzXI8P3d…)
- "There are two types of algorithms, the one that predicts user’s intentions, and …" (ytc_UgxRamO3C…)
- "I can't stand ChatGPT. It's designed for imbeciles who can't research/retain kno…" (ytc_UgykFwrog…)
- "What they're thinking is they don't care. Long-term, one big plan is to filter e…" (rdc_m9h9kwd)
- "all chatbots hallucinate but I've never seen it get defensive, they are literall…" (ytr_UgzLhzFyZ…)
- "To be fair, AI companies being forced to license copyrighted work to train their…" (ytc_UgwzWgjXy…)
Comment

> Governments need to protect people. Remove Altman, Musk, Jobs from society and stop their programmes until it’s able to be managed, critiqued, and under the control of human leaders who are constrained to only use AI for purposes other than self aggrandisement and gain. Use to solve problems for humanity, to improve people’s lives, to create fairer societies, to create self worth, to assist with sustaining life in all its forms throughout the planet.

youtube · AI Governance · 2025-09-04T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
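Each coded record assigns exactly one value per dimension, so malformed model output can be caught with a simple membership check. A minimal validation sketch follows; note that the allowed-value sets below are inferred only from the values visible on this page, not taken from the project's actual codebook.

```python
# Allowed values per dimension, inferred from this page (an assumption,
# not the project's authoritative codebook).
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed", "unclear"},
}

def validate(record: dict) -> list[str]:
    """Return the dimensions whose value falls outside the allowed set."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above:
record = {"responsibility": "company", "reasoning": "deontological",
          "policy": "regulate", "emotion": "outrage"}
print(validate(record))  # → []
```

A record with a missing or unknown value (e.g. an LLM inventing a new category) would come back with that dimension name flagged, which is useful when accepting batch output.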
Raw LLM Response
```json
[
  {"id": "ytc_Ugyqo93NdBxmXjtjIzt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwUZaCD1HiqSAMX7_F4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_Ugyqs0oNEQXWLRDmSIF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxGGGrQzSkAEENJ7bp4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugx2ROtkNd4utYHyMmV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyzGghUHc_WC5Z6ubd4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxyINZ2NYG32HjBPz54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxmE_G9WlOVoECyO-F4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyN7rGjpYF2mVPZ8-94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzLVn7gHYggDj6DprV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]
```
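The raw LLM response is a JSON array of coded records, each carrying the comment ID it applies to, so the "look up by comment ID" view reduces to parsing the batch and indexing it by `id`. A minimal sketch (the two-record `raw_response` string is an abbreviated copy of the output above):

```python
import json

# Abbreviated copy of the raw LLM response shown above (two of ten records).
raw_response = """
[
  {"id": "ytc_Ugyqo93NdBxmXjtjIzt4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyzGghUHc_WC5Z6ubd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a batch of coded records and index them by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgyzGghUHc_WC5Z6ubd4AaABAg"]["policy"])  # → regulate
```

Since each record keeps its own `id` field, the same index can be rebuilt from any stored batch, which is what makes an ID-based inspection view like this one cheap to serve.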