# Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a response by comment ID, or pick one of the random samples below.

**Random samples**

- `ytc_UgzUelZdj…` — "A.I can never be better than the human brain. Human brain uses approx 20 watts o…"
- `ytc_UgwyPFEzr…` — "rights? depends on the robots sentient class level its job and such but for the …"
- `ytc_UgwAE-CMh…` — "Just proves that every tesla robot we've seen as never actually been acting auto…"
- `ytc_Ugy48er0d…` — "I think its not enough to hope everything turns out well with AI, we need to pre…"
- `ytr_UgwFb9gx6…` — "Good approach. For me, anti-AI or plagiarism checker should be reinforced immedi…"
- `ytr_UgwEMV2TX…` — "Progress is progress, at least! It's a good sign. Hopefully we will see progress…"
- `ytc_UgwPcSEY0…` — "Licensed or not, AI still gonna saturate the market with ocean of content and no…"
- `ytc_Ugw_ICeyh…` — "Talkin about AI safety to me feels vague and pointless. 90% can make safety guar…"
**Comment**

> There is no reason to believe that machine digital artificial intelligence will supersede human intelligence. Human brains are analog, no one understands how they work although I have an inkling. Moreover, market forces run collective decision-making, beyond the ken of any digital A.I., i.e. God exists and runs everything.

youtube · AI Governance · 2023-04-19T00:3…
**Coding Result**
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
**Raw LLM Response**

```json
[
{"id":"ytc_UgyT4tOPh_rg-Z_Kyw14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxp9oFWw1zirFwL2PF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwOiSaSw0S7-eC4C4l4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzbywtiTzNeTUwMUl54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyv5XTkm8cpAsLyOQV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxwEwKvHZWqs9xhTcx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy6xlHgLjheZLI18UV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzb6q9fq6EI0dx0MOF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwLN4C2Hgc7PfBt0K14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwnvKwaEoWxFmYv_EZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```