Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_Ugy52cuaN… · What about simply slowing down to open up more options such as merging into the …
- ytc_UgwutwqOp… · AI won’t do shlit. Its humans that pull the strings all along. They just need a …
- ytc_Ugw6VLW-v… · Any engineer worth his weight in salt, even the engineers at tesler will tell yo…
- ytc_UgyMEVOEP… · People have free will. They will vote how they want. Humans lie too (Trump Russi…
- ytc_UgxdoycDx… · Algorithms run in binary code through electronic silicon chip Processing Units h…
- ytc_UgzcMJIL6… · No. All AI systems could be used as weapons by any government against its citize…
- ytc_Ugy1BlHzL… · "If A.I. infiltrates social media and and manipulates public opinion in a way th…
- rdc_mpg505n · I think all this discussion about "AI replacing developers" ignores one crucial …
Comment
Why even chance it and allow it to continue further? It’s a tool but its likelihood of being misused or controlled by tyrannical monsters is more than likely. And it becoming autonomous? This isn’t new , we’ve been here before but it ended so terribly the times before little exists in a matter of archeological” proof to back the discovery of it today up. But there are those who are antediluvian still existing today. Not many but still, they know, the grigori and they hide in the shadows in safety from those who’d exploit or try to clone them or control them. They wa5chers need to come forth out of th3 darkness even if by a proxy to let mankind know of its past because it will repeat as before but thi# t8me the potential to destroy everything and everyone completely exists now more than ever,
youtube · AI Governance · 2026-02-13T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
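A single coding result like the one above can be modeled as a small record type. The sketch below is a minimal, hypothetical representation (the `CodingResult` class and its `as_markdown` helper are illustrative, not part of the tool); the field names mirror the table's dimensions.

```python
from dataclasses import dataclass


@dataclass
class CodingResult:
    """One coded comment; fields mirror the Coding Result table."""
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str

    def as_markdown(self) -> str:
        # Render the record in the same two-column table layout
        # used by the inspector page.
        rows = [
            ("Responsibility", self.responsibility),
            ("Reasoning", self.reasoning),
            ("Policy", self.policy),
            ("Emotion", self.emotion),
            ("Coded at", self.coded_at),
        ]
        lines = ["| Dimension | Value |", "|---|---|"]
        lines += [f"| {name} | {value} |" for name, value in rows]
        return "\n".join(lines)
```

For example, `CodingResult("none", "deontological", "ban", "fear", "2026-04-26T23:09:12.988011").as_markdown()` reproduces the table shown above.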
Raw LLM Response
[
{"id":"ytc_Ugwi5Xmg5zh9Q-rHGUl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwQ9m6_6HXzHYUYL2p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz4nmc-MrMkstKQM_t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxe-r_WX8uOdvzE0Ph4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy0NzaQomSYxdKSxlB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzvSDbuz29zm3z2Emx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxhVhjailaAy2wX2It4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyJYHBkKzlYBPhJs0l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxRoPklqT0vqtWSjxB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzC89h-RqRQm4xTyql4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
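To support lookup by comment ID, the raw batch response above can be parsed and indexed. The sketch below assumes the codebook contains only the values visible in this sample (the real schema may include more); `index_response` is a hypothetical helper, not part of the tool.

```python
import json

# Allowed values per dimension -- inferred only from the values visible
# in the sample response above; the actual codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "government"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear", "none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "fear", "mixed", "approval", "outrage"},
}


def index_response(raw: str) -> dict:
    """Parse a raw LLM batch response and index records by comment ID,
    rejecting any value outside the assumed codebook."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id
```

With the response above, `index_response(raw)["ytc_Ugwi5Xmg5zh9Q-rHGUl4AaABAg"]` returns the first record; a value outside the codebook raises `ValueError` instead of silently entering the index.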