Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "It will really depend on the facts of the case but it's going to be very difficu…" (`rdc_navrx3j`)
- "AI art isn’t actually AI art. True AI art would be something the AI itself, and…" (`ytc_UgyctIxzP…`)
- "@vxbrxnt they can, I even have a class where I teach how to paint like me, but I…" (`ytr_Ugwj6lL2e…`)
- "i honestly agree. if i find meaning in a piece of art and i connect with it, onl…" (`ytr_UgzTMhGye…`)
- "I think ChatGPT can be good at some things, that's fine... But using it to infla…" (`ytc_UgyTttxB7…`)
- "I do actual AI research and in my opinion we are very far from AGI (artificial g…" (`rdc_dcj8gk2`)
- "I could believe they might be reasonably consistent in determining which emotion…" (`ytc_UgwjjQ9bF…`)
- "I swear to god I hate when people compare digital art to ai art T^T like the onl…" (`ytc_UgwAlgHF2…`)
Comment
I know the solution and it’s not only the solution for not being the victims of an a.I. genocide but it’s the solution for so, so, much more! Furthermore, it’s not a sarcastic, generic, unreasonable, difficult, unachievable or even a ridiculous solution. I just don’t know how to best get the ball rolling. The wrong ways could and likely would make it ineffective or no longer possible, I need to determine what way would give it’s likelihood of success the greatest probability. Anyone that’s a superb problem solver and interested in helping me resolve my dilemma, I’d love to discuss it further.
youtube · AI Governance · 2025-06-18T20:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy1EQhfunsv1dWZhSt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgziY8Unu7Mmceoyiw54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyH14eb8lDk-rJ8Rl54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxvxSv8z4oQsLC8Nnh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgyD9dinVTbO7JG7eKl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwZBsnIXg6s3M722FZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgynMc8kJ5UGG1TqmJJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxNQZUKls6WBrKnWQR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwytrPoExKqOU0uNht4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxwELlwNEpdDFdaV4V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
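A raw batch like the one above can be parsed and sanity-checked before the per-comment rows are stored. The sketch below is a minimal example; the allowed value sets are inferred only from the values visible in this batch and are an assumption, not the project's canonical codebook.

```python
import json

# Value sets observed in this batch; the real codebook may define more
# categories per dimension (assumption, not the canonical schema).
OBSERVED = {
    "responsibility": {"none", "ai_itself", "developer", "company", "user"},
    "reasoning": {"mixed", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "liability", "industry_self"},
    "emotion": {"approval", "fear", "outrage", "indifference", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and flag rows with unexpected dimension values."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in OBSERVED.items():
            if row.get(dim) not in allowed:
                # Flag rather than drop: an unseen value may be a legitimate
                # codebook category that simply did not occur in this batch.
                print(f"{row.get('id', '?')}: unexpected {dim}={row.get(dim)!r}")
    return rows

sample = ('[{"id":"ytc_example","responsibility":"developer",'
          '"reasoning":"virtue","policy":"none","emotion":"fear"}]')
rows = validate_batch(sample)
print(len(rows))  # 1
```

Flagging rather than rejecting keeps the pipeline tolerant of codebook values that happen not to appear in a given batch.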