Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "Only thing is musk isn't a scientis he is a business man he "invented" pay pal b…" (ytr_UgwpDqT6F…)
- "Having artistic ability takes alot of work, takes alot of effort, no one is born…" (ytc_UgzwQEdO3…)
- "> Domain-specific LLMs are going to be common in the future. / not necessarily…" (rdc_jkpbt3c)
- "Existential risk posed by AI is not from AI as the dominator of humanity, but o…" (ytc_Ugx1ltpCl…)
- "And just so you know DA has done NOTHING to fix the problem other than say "clic…" (ytc_UgwhtkhWq…)
- "Totally agree. I want to add a few problems I see with making AI "your partner" …" (rdc_mliyw0p)
- "It's great to see such affection for Sophia! She truly embodies the spirit of wi…" (ytr_UgzHvHyTY…)
- "It is interesting to look back at this video, after 5 years. Two big things had …" (ytc_UgxMb7yx_…)
Comment
arguments on grave danger of ASI look as rock solid as they get. They are obvious and unavoidable, especially if we'll consider two things - AI is trained on human data at the beginning, and we all know all too well what humans are. Number two - ASI will be, to call it correctly, an Alien Form of Life which is much more cognitively capable than we are and it's based on totally different foundation compared with biological life. Alignment is completely impossible all by itself, but in addition we need to remember that due to different nature of ASI, it's internal native system of values and priorities will be completely different from the biological life forms, most likely opposite. It's clear what it will lead to I guess.
youtube
2026-04-17T23:0…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugy6HE2TbDx8QWWGkdx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},{"id":"ytc_Ugyd-BS-5fkIKd-vO894AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},{"id":"ytc_UgzS-A-3YLunNd1Fb_54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},{"id":"ytc_UgzzO5yKhZJOfQq5V5V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},{"id":"ytc_Ugzu7M8ticLFUXRUWtB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytc_UgwPTddvtcxT3V2qZpl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_UgzVSBTKlvhXdy8IChR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"},{"id":"ytc_UgwO97KdPSH2YaXzJLB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},{"id":"ytc_UgyK5z0rsH6Jy2wRe454AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_UgzmMCjRRvZ4Nx_JweZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"}]
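The look-up-by-comment-ID step above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: it assumes the raw LLM response parses as a JSON array of per-comment codings, and it reproduces two entries from the response shown above for demonstration.

```python
import json

# Raw LLM response: a JSON array of per-comment codings.
# Two entries are reproduced from the response above for illustration.
raw = '''[
  {"id": "ytc_Ugy6HE2TbDx8QWWGkdx4AaABAg",
   "responsibility": "distributed", "reasoning": "mixed",
   "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzzO5yKhZJOfQq5V5V4AaABAg",
   "responsibility": "ai_itself", "reasoning": "unclear",
   "policy": "unclear", "emotion": "fear"}
]'''

# Index the codings by comment ID so each inspection is a dict look-up.
codings = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id):
    """Return the coded dimensions for a comment ID, or None if absent."""
    return codings.get(comment_id)

print(lookup("ytc_UgzzO5yKhZJOfQq5V5V4AaABAg")["emotion"])  # fear
```

Indexing by ID also makes it easy to spot comments the model skipped: any ID present in the sample set but missing from `codings` had no entry in the raw response.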