Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> I know that this maybe a left field thing to say, but I must. We have seen the movies about this very thing. Someone postulated this threat when the first Terminator movie came out, then I-robot, even Star Trek. However, here we are on the brink of negating humanity. Why? because we can. Inconceivable!

youtube · AI Governance · 2025-06-20T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
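A coded record like the one above can be sanity-checked before it is stored. The allowed values below are only those that appear in this panel's sample output; the actual codebook may define more, so treat the sets as an assumption in this sketch:

```python
# Dimension values observed in this panel's output; the full codebook
# may define additional values (assumption for illustration).
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose value falls outside the observed sets."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above passes cleanly.
record = {"responsibility": "developer", "reasoning": "deontological",
          "policy": "ban", "emotion": "fear"}
print(validate(record))  # [] -> every dimension has a recognized value
```

A non-empty return value flags which dimensions to re-inspect in the raw model output.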
Raw LLM Response
[
{"id":"ytc_UgxG_bW-Ga44iH1nQdx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyiwpxK51Qn7h6meiR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwDLvinVNtQGrtSEfF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyGJc4MatqIAwsv90t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxOOPFK7kjqEn4I_-R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxEx5cGAbGxaSINSht4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx6f9LsQrJTj4sR9694AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzReP_rguS7WN-Mfb54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzQ6jysPESiymlsXBd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgystHbHW3zDK-ZPitZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
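A batch response in the shape shown above can be parsed and indexed by comment ID for lookup. This is a minimal sketch using two records copied from the response; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) follow the schema in the output, while the variable names are illustrative:

```python
import json

# Two records copied from the raw model response above.
raw_response = """[
  {"id": "ytc_UgzReP_rguS7WN-Mfb54AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxG_bW-Ga44iH1nQdx4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]"""

# Parse the JSON array and build an ID -> coding lookup table.
codings = {record["id"]: record for record in json.loads(raw_response)}

# Look up the coding for a single comment by its ID.
coding = codings["ytc_UgzReP_rguS7WN-Mfb54AaABAg"]
print(coding["policy"], coding["emotion"])  # ban fear
```

Because the model output is plain JSON, `json.loads` will also raise a `JSONDecodeError` if a response is malformed, which is a useful first check when a coded comment looks wrong.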