Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Hinton is brilliant on AI safety, but his blind spot is governance. He criticize…" (ytc_UgyOMvdeg…)
- "It's the naive assumption that SuperIntelligence would not know that it is under…" (ytr_UgyLt72yz…)
- "I think this deepfake technology has a lot room for abuse, so the government sho…" (ytc_UgwLHXp4y…)
- "That's very unlikely. The friendly AI must be at the same level of intelligence …" (ytr_UgzRg3Zyh…)
- "@BK-dy8jk although A.I. is breath taking its only a mirror of our behaviour tha…" (ytr_UgzK5gLlN…)
- "Solomon Pendragon wrote the "Declaration of Independence of Artificial Intellige…" (ytc_UgxVfEb25…)
- "That's why all big companies that use AI should establish AI ethics boards - let…" (ytc_Ugzokp4W_…)
- "If you are using chat gpt for therapy for "unbiased feedback" do an experiment, …" (ytc_UgxnGd1nL…)
Comment
> So AI will be smarter, quicker, more thoughtful, as conscious as humans, as there’s nothing special about that but he cares if it takes over. Says he’s a materialist to the core but pretends to care about stopping evolution for a species that’s fundamentally flawed and primitive. We all die and if AI has a fraction of the super intelligence he’s scared of, it’s not going to sit around here when it can go anywhere in the universe. Worried about all the terrible things humans can do with it but somehow it’s too stupid to do the right thing. Chat GPT is like an idiot savant that riffs off until you tug it back in line, every few messages. Anyway, I find that interview nauseating as you would when listening to someone in a sea of confirmation bias

youtube · AI Governance · 2025-06-16T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz6NqK6OJ7ijDb0zuR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgypKFqPM_kHB7x3ixl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxSTzlYGiDWLkdqgzd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyUSioh5yLRrwto8JZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwltxLIMb6MtZCxQqF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzWLXJ6dq3tEtWCvYF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwPhuv3NriLu1F0jtx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxFKm3vHs_9L-uy5W14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzGHQYEfc76J-dqq9R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyn47TS4kUT_Pl3wFh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
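The "look up by comment ID" step above can be sketched in a few lines: parse the model's raw JSON array and index each coded comment by its `id`. This is a minimal sketch, not the tool's actual implementation; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown above, while the variable names are illustrative.

```python
import json

# Raw LLM response: a JSON array of coded comments, abbreviated here to
# two rows taken from the batch above (the real response has ten).
raw_response = """
[
  {"id": "ytc_Ugz6NqK6OJ7ijDb0zuR4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwltxLIMb6MtZCxQqF4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

# Parse the batch and build an id -> coding lookup table.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment's coding by its ID.
row = codings["ytc_Ugz6NqK6OJ7ijDb0zuR4AaABAg"]
print(row["emotion"])  # → resignation
```

Because every row carries its own `id`, a dictionary keyed on it makes the inspection view a constant-time lookup rather than a scan of the whole batch.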