Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Fully self driving cars make sense, just not any of the ways we've tried to do i…" (ytc_Ugxd_KMn0…)
- "AI artists, you mean lazy pricks? I mean it's no surprise really when you consid…" (ytc_UgybJ2vc8…)
- "@nionex2796 I'm sorry if my point led you to that thought process, all I mean is…" (ytr_UgwzxAdCQ…)
- "The little submissive whimper from whoever was told to shut off the ai stuff 😭…" (ytc_UgxdtYAgM…)
- "Robot AI is becoming so human-like, eventually they'll also start to steal merc…" (ytc_UgwAyKexS…)
- "Makes me thunk of how some people are actually trying to make comissions off of…" (ytc_Ugw_ZTex_…)
- "I don’t think AI will capable of replacing humans in all areas. But it will repl…" (ytc_UgzyTG_Cj…)
- "@GEGGARCHY dude it could just be that the ai was right 🤣 there was an AI that kn…" (ytr_UgxnAjZjU…)
Comment
Suppose a psychopath like GEORGE Soros or Bill Gates or Anthony Faucci to name a few, takes total control on AI. What would happen to people???? . I think China is a good example of this. In this case is not a psychopath but a communist system subjugating people to the point that these innocents are put on an operating table and slowly their organs are removed in vivo, and so on.
youtube · AI Governance · 2025-08-18T14:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
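The four coding dimensions can be checked mechanically. A minimal validator sketch is below; note that the allowed-value sets are only those observed in the raw response on this page, not necessarily the project's full codebook, and the `validate` helper is illustrative, not the tool's actual code:

```python
# Allowed values per dimension, inferred from this batch only;
# the real codebook may define additional values.
OBSERVED_VALUES = {
    "responsibility": {"user", "developer", "government", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"ban", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "approval", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks well-formed."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in OBSERVED_VALUES.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding shown in the table above (id taken from the matching
# record in the raw response below on this page):
coded = {
    "id": "ytc_UgwWHDx6RvNzXAMYj_d4AaABAg",
    "responsibility": "user",
    "reasoning": "deontological",
    "policy": "ban",
    "emotion": "outrage",
}
```

Running `validate(coded)` on the displayed record returns an empty list, i.e. every dimension carries a value seen elsewhere in the batch.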
Raw LLM Response

```json
[
  {"id":"ytc_UgyL64usiN99E6JPVS14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwHH0LJZF4N_EWR2TB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyw880POh1kBGFWb_l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxNxdBWtJ6luEcLpyZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw5S1aTr8iJjiw_Tx94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwjqLpKh1Lvyjdkn_F4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgynvUxxQDfM5oept2x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwWHDx6RvNzXAMYj_d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyX05EYqasQB3yKqzl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyG2xDiVFgfKBEmDx14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
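Because the model codes a whole batch of comments in one call, the per-comment lookup on this page presumably just parses the returned array and indexes it by `id`. A minimal sketch (the function name and the shortened two-record sample are illustrative, not the tool's actual code):

```python
import json

# Shortened sample of a raw batch response; real responses carry the
# full list of coded records, as shown above.
RAW_RESPONSE = """[
 {"id":"ytc_UgyL64usiN99E6JPVS14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwWHDx6RvNzXAMYj_d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]"""

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse the model's JSON array and index each coded record by comment id."""
    return {rec["id"]: rec for rec in json.loads(raw)}

codings = index_by_id(RAW_RESPONSE)
print(codings["ytc_UgwWHDx6RvNzXAMYj_d4AaABAg"]["policy"])  # → ban
```

Indexing by `id` also makes it easy to spot batch-level problems, such as duplicate ids or comments the model silently dropped from its answer.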