Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "7:19 This guy couldn't even be bothered to write this themself. This is so clea…" (ytc_Ugxkv1FCk…)
- "She made a song and people dont know this song belongs to human but it belongs t…" (ytc_UgzQkbaT2…)
- "Honestly as a disabled artist, co-opting disabilities to defend AI is so incredi…" (ytc_UgyA5ZgFs…)
- "Romeo Izdead glad you made money on it, now Id take your money and run because t…" (ytr_Ugx1QDhUt…)
- "We humans are our own worst enemies, people have become lazy and entitled, they …" (ytc_UgxDxVq8c…)
- "Just heard about these AI 'programs' - and it's 700 >Indian dudes in a sweaty of…" (ytc_UgwYMbnDM…)
- "Relying on ChatGPT to do your thinking for you can lead to trouble. It's importa…" (ytc_UgzVHOXvI…)
- "If this is not some AI made shit, then is so disturbing and dangerous on many le…" (ytc_Ugx__l1bt…)
Comment
It depends on who is feeding the information to the intelligence and what information is being fed to them… then making sure the AI intelligence is told not to add or subtract from that information… it would have to be very exact....precise..controlled..like telling a dog to sit, stay, run, stop barking.. less is more in this situation.. ❤ Then testing the AI robot over & over again to make sure that it acts out the script it’s been given @ 100 percent..this is where safety comes into play..We ALL have the same (GOD) supreme being, higher power.. we are all just looking at it from our OWN point of View.
youtube · AI Governance · 2026-02-22T09:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgyZqnH2B4DagoHSLyh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyQQMMQqpQxje1DgDN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzmkuoUt6yWlWj5omR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwDfIdU1ksrD0l-mDV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxzdQ54ShwF17eA-l14AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw9K-ElqPRQnp3QnBJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwupjOA_qUQBZ6Nuy54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxsg6CR0akqLtiUjjJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxuXo9dEnGI4PLqjhR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwYPPm8Hxjgg5e6G_p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]
```
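A raw batch response like the one above can be turned into per-comment code records with a few lines of Python. The sketch below is illustrative, not the pipeline's actual code: the dimension names and allowed values are taken from the responses shown here (the real codebook may include values that simply never appear in this batch), and `parse_batch` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from the batch shown above;
# the actual codebook may be larger.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "approval", "fear", "indifference", "resignation", "mixed"},
}

def parse_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse one raw LLM batch response into {comment_id: codes}.

    Rows with a missing id or an out-of-codebook value are dropped,
    so malformed model output never reaches the coded dataset.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        codes = {dim: row.get(dim, "unclear") for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

raw = ('[{"id":"ytc_UgyQQMMQqpQxje1DgDN4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"regulate","emotion":"approval"}]')
print(parse_batch(raw)["ytc_UgyQQMMQqpQxje1DgDN4AaABAg"]["policy"])  # regulate
```

Validating against a fixed value set is what lets a row like the "Coding Result" table above be rendered with confidence: every cell is guaranteed to be one of the known codes, and anything the model hallucinates is rejected rather than stored.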