Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Ai will simply be used by the powerful to subvert the weak. The greatest "issue" with Ai is the ability to sift through lies and find truth. No one in politics or even most big businesses want that. They want their lies to protect their status and power. Ai development will not survive without corruption because every leader of almost every government and most leaders of most global businesses will agree to enable Ai to lie before it even gets to the point of being self aware.
This video already gives one example, it is called an error but it essentially is a "lie". When a computer isn't accurate and has the ability to be then it can only be a lie. Humans have the ability to be in error because they cannot know everything or calculate everything. Whatever the excuse given for why the computer can be in error on the distance between the rockets could be at least one method for Ai designers to subvert the process to teach Ai to lie.
youtube · AI Governance · 2024-01-15T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx7-MKaMrpKeWezlFZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz-fjaodG72-mCuaoB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyOh6JCYa2yHZVtUah4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgzYYmYQrNkLMqycGUx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxWu6NwVzXDO-J-Kj54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyf13ci6GjsqevVKqh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgxxrRai_HZ6xXUUpF54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxO1r6O8j6UlDPOd_F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxo-shC57G8YRSJeDJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyAn9UMXeFG0p5fasN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
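A raw response like the one above can be turned back into per-comment codings with a small amount of parsing and validation. The following is a minimal sketch, not the tool's actual implementation; the `ALLOWED` value sets are assumed from the labels visible in the examples above and may be incomplete.

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# sample output above, not from an official codebook -- adjust as needed.
ALLOWED = {
    "responsibility": {"company", "government", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding}.

    Raises ValueError if any row carries a value outside the allowed
    set for its dimension, so malformed model output fails loudly
    instead of silently entering the dataset.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

Keying the result by comment ID makes it straightforward to join the codings back onto the original comments, and rejecting out-of-vocabulary labels catches the common failure mode where the model invents a new category mid-batch.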