Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_Ugwy3qCpd…`: "some ai art is developed with great care, patience and time, an ai made movie ca…"
- `ytc_Ugws8he_m…`: "no I think WDYM PEOPLE OR ROBOT IT'S A ROBOT YOU BLIND DUMB ASS IF YOU THAT BLIN…"
- `ytc_UgzKw_b7x…`: "************************************** When we say that we live in a simulation,…"
- `ytc_Ugy22ANW7…`: "humans have grown so much and have literally sense the don of time (or whatever …"
- `ytc_UgwN6kU6z…`: "meanwhile google's deepmind stated that LLMs can't be commodified and everything…"
- `ytc_Ugzu00gPj…`: "Speaking of artificial intelligence, I was watching early morning news (5 a m) o…"
- `ytc_UgwEvHvMC…`: "I teach profoundly autistic children. When is AI automating that? I honestly wou…"
- `ytc_UgxkuzP_U…`: "I don't see Glaze as a solution, but as an enabler to extend the reach of AI sto…"
Comment
If we were truly intelligent we would have already long respected one another and lived in harmony and balance with or without any machines. Instead we have been exploiting one another, taking advantage, killing for money, for power, for religious believes, and who knows what and even more so with "machines". Perhaps we can use AI to enlighten us and establish harmony, balance and respect, to eliminate greed, corporate and national interests, protect environment and life all around planet Earth, but are we intelligent enough to create SUCH intelligence?
youtube · AI Governance · 2025-06-16T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzFu5zZpQq8HJutnD54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxMGjY1MJulJ2lXDwB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyzKpcfApfbdR3F7A14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxjy3oAYYF_eOIOJr54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzUzkK8pIjOg0tXrQF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw9HkDvOC7UBhm7BgF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugx6Dv6C_2Nb4DSaZfF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyH7yEM1BRCr5BONah4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxqjOFcLo1fM9EkMGx4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxxHwhNF69lilAc_IN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
```
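The raw response above is a JSON array of per-comment codes along the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch can be indexed for lookup by comment ID is shown below; the variable names (`raw_response`, `codes_by_id`) are illustrative, not the tool's actual implementation, and the two sample rows are copied from the response above.

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw_response = """[
  {"id": "ytc_UgxqjOFcLo1fM9EkMGx4AaABAg", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxxHwhNF69lilAc_IN4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]"""

# Index the batch by comment ID so any coded comment can be looked up directly.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

code = codes_by_id["ytc_UgxqjOFcLo1fM9EkMGx4AaABAg"]
print(code["responsibility"], code["reasoning"])  # → distributed virtue
```

This is the same lookup the "Look up by comment ID" field performs: the first sample row corresponds to the Coding Result table above (distributed / virtue / unclear / mixed).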