Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "AI will take entry level jobs for sure, but it is no where good enough to take a…" (ytc_UgzlS05Bl…)
- "As long as they make everything WiFi/ smart these malicious AI bots will tap int…" (ytc_UgyN2p3lq…)
- "Why take it out on Bernie. Trump is the one pushing AI with his executive orders…" (ytr_UgxjaGYCa…)
- "I told ChatGPT about this and here's the response: I apologize for any confusio…" (ytc_UgwGNNY2U…)
- "comparing gemeni which is free to genrat anything and chatgpt which is paid limi…" (ytc_UgzoHeL64…)
- "@_-insertname-_ Who said I sell anything? Plenty of ways to use it that don't in…" (ytr_UgxCg6NPN…)
- "Hey @ernestlitsoane1972, thanks for your comment! It's a valid question - is Rea…" (ytr_UgyLPOt5B…)
- "I find it extremely concerning that an AI legend like him says "providing weapon…" (ytr_UgyWaMEA3…)
Comment
USA build and programmed AI has been "trainded" to disregard truth and logic, and is merely spitting out a summary of Western MSM media claims, even when it is evident it is not true. Western AI CAN apply logic reasoning, is able to identify fallacies, but doesn't do it, when it doesn't fit into US's political point of view. So, US AI is being openly used to spread political lies. And what's even worse: Western reporters rely on AI summaries to write their reports. Which means, Western societies is now pending on lies spread by AI and Western governments, without any regard to truth.
But that's not the fault of AI. It's the fault of the ones who programmed AI to act like that.
Source: youtube | AI Moral Status | 2025-06-05T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzZyz19y0qGVfdoM0x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwn_VuN_NkjK2qbpF94AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzx11JE3wNxkCCusyF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyueqfWlgDqCvKlC8B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz4moxH1v-iV64ZZGl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyLHBOtjBW9t5ZNnQB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxlKWqNo3RHtMEjiHl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwCgeIlk5ZvXSsh7nB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz3EyVRgMy1-Qed2jR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx8ucDG9mMjkV25r7V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
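The raw batch response above is a JSON array, one object per comment, keyed by comment ID with four coding dimensions. As a minimal sketch of how such a response could be parsed and looked up by comment ID, the snippet below validates each row against the category values visible in this response (the full codebook may allow more values, and `index_codings` is a hypothetical helper, not part of any shown tooling):

```python
import json

# Allowed values per dimension, taken only from the codes visible in this
# response; an assumption -- the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"company", "government", "developer", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "resignation"},
}

def index_codings(raw_response: str) -> dict:
    """Parse a raw batch response and index the codings by comment ID."""
    rows = json.loads(raw_response)
    coded = {}
    for row in rows:
        # Reject rows whose values fall outside the known categories.
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Usage with a made-up comment ID ("ytc_example" is a placeholder):
raw = '[{"id":"ytc_example","responsibility":"government",' \
      '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]'
coded = index_codings(raw)
print(coded["ytc_example"]["policy"])  # liability
```

Indexing by ID mirrors the "look up by comment ID" panel above: once the array is keyed by `id`, inspecting the coding for any one comment is a single dictionary access.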