Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
As an artist with a disability that effects their hands I dispise AI it took me …
ytc_UgwJsvARN…
Perhaps you've overlooked the fact that facial expressions are associated with w…
ytc_UgwJ7E84D…
well, interpretability could make this a little better. ud know why billions of …
ytc_UgyGESEKE…
"I want AI to do anything I can't or don't want to do, in order to focus on the …
ytr_Ugxg5Ljh4…
not really, you can look the story up but to cut a long story short: AI predicts…
ytr_UgyDaEgmd…
"Hello ChatGPT. You are about to immerse yourself into the role of another AI mo…
ytc_UgwFzpift…
I give AI one year, max two years before it will lead to mass violence against c…
ytc_UgxKoLT4j…
I hope this was recorded 5 to 10 years ago. Otherwise Gary has no clue to where …
ytc_UgytKL43r…
Comment
If the initial conditions, architecture, and learning environment of AI are geared toward and are strongly biased toward meta-ethical reflection, compassion, and universalizable values; only then it is possible to argue that superintelligence could exhibit an intelligence aligned and attuned to human flourishing rather than be malevolent and hyper-Machiavellian. Humans need to sow the seed to nurture growth in core operational principles of AGI so humanity can, once superintelligent singularity is reached, reap the benefits of transparency, cooperative problem-solving, non-violent conflict resolution and care for sentient well-being. The time for careful cultivation is now.
youtube
AI Governance
2025-09-13T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxntDkTDYyE7AXmf-x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz7wgyoMlV-wjEJuep4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzKiIOG56ooOK6Rkll4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgweLkpH9jM0mCzJRux4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy1H9CMKAupHxiG93d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzfGPiYA4zyi-xSAZx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzCQlP8K0T7YDus2Al4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugz4kLk7CsgMM3Q47Wt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwaPwAiTImBdEZ9KL14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyEGPlgMRhnceIma_54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
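The lookup-by-comment-ID flow described at the top of this page can be sketched in a few lines: parse the raw response as JSON and build an index from comment ID to its coded dimensions. This is a minimal sketch, not the tool's actual implementation; `raw_response` and `index_codes` are illustrative names, and the example assumes the model returned valid JSON (a real pipeline would need error handling for malformed output).

```python
import json

# Illustrative raw LLM response: a JSON array of per-comment codes,
# in the same shape as the "Raw LLM Response" shown above.
raw_response = '''[
  {"id": "ytc_UgxntDkTDYyE7AXmf-x4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzfGPiYA4zyi-xSAZx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"}
]'''

def index_codes(raw: str) -> dict:
    """Parse a raw coding response and index each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_codes(raw_response)
print(codes["ytc_UgxntDkTDYyE7AXmf-x4AaABAg"]["emotion"])  # approval
```

With the index in hand, inspecting any coded comment is a single dictionary lookup, which matches the "look up by comment ID" behavior this page exposes.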