Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "I have never interacted with an AI chatbot and ended up better off for it.…" (ytc_UgyRAazMa…)
- "I appreciate your great video, but the thing I actually fear the most is a soft …" (ytc_UgwCQBNF0…)
- "Authority Institution AI. Every single time technology makes a new world dynamic…" (ytc_Ugz8PRKtJ…)
- "It's not AI that is biased it's reality. That is what needs to be fixed…" (ytc_Ugz4UN6Fg…)
- "Ralyx0 That would involve a person hardwiring and/or programming it. So what we …" (ytr_UggAKhH0L…)
- "it seems that society's relinquishment of airport authority to the AI system (ye…" (ytc_UgybrpXCY…)
- "that's the thing with AI. stay longer in tetris? pause the game. Feel sad / lone…" (rdc_nnkh5ky)
- "Synthetic media is one of the most dangerous threats to cybersecurity we’ve ever…" (ytc_UgyGEQu8L…)
Comment
I'm in the camp that AI can't really become what the AI bros and sci-fi novelists predict it will be. Sure, it can and will cause a lot of problems (as it already is) but it's not going to destroy the world or humanity. (Humans will do that all on their own, AI-supported or not.) I think this is a case of humans overestimating their own capabilities and glamorizing their own creations.
Probably the main reason I'm optimistic is because I disagree with Hank that human intelligence consists entirely in the human brain and its bio-mechanical functions. Actual intellect is a function of the soul, not the brain. The only thing we can make is a facsimile of it.
Source: youtube · Video: AI Moral Status · Posted: 2025-10-30T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgymsX6PVC9euDxIKMZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwAgiBwxialgQRO0Lp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxDnp-z-GcW7AGqmjZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzYDV7CwHwHbC3Sifx4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzafF0pViR_pFRkS1B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwvpWKNHCYLEqnqXx94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwu93GPJgYHvEmLFvF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwsJDKdSobIU2wdmyN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzEG3MDz7-XtYlXXCp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz5ett163pggwT6WfZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
```
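The raw response is a JSON array with one coding object per comment. A minimal sketch of how such an output could be parsed and indexed for lookup by comment ID — the field names follow the schema shown above, but the `index_codings` helper is a hypothetical illustration, not part of the tool:

```python
import json

# A trimmed example of the raw LLM response format shown above
# (two entries taken verbatim from the batch).
raw_response = """
[
 {"id": "ytc_UgwvpWKNHCYLEqnqXx94AaABAg",
  "responsibility": "distributed", "reasoning": "consequentialist",
  "policy": "none", "emotion": "resignation"},
 {"id": "ytc_Ugwu93GPJgYHvEmLFvF4AaABAg",
  "responsibility": "government", "reasoning": "consequentialist",
  "policy": "regulate", "emotion": "fear"}
]
"""

def index_codings(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response and index each coding by its comment ID."""
    codings = json.loads(raw)
    return {entry["id"]: entry for entry in codings}

by_id = index_codings(raw_response)
print(by_id["ytc_UgwvpWKNHCYLEqnqXx94AaABAg"]["emotion"])  # resignation
```

With the batch indexed this way, the "Look up by comment ID" view above amounts to a single dictionary access per ID.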