Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Love how people forget robots think like a list. How make a peanut butter jelly …" (`ytc_UgwpwQXeD…`)
- "One thing to note is that Google has their own AI chips and do not entirely reli…" (`ytc_UgzbKVN4B…`)
- "Probably right. So why doesn't the lousy Ai be destroyed? I hate it anyway. Im s…" (`ytc_UgyY6LZ2r…`)
- "if thats the case then I better start saving for my robot just in case when my g…" (`ytc_UgzGZKq-Z…`)
- "There is an AI v-tuber now, if these streamers wanna support ai art they might a…" (`ytc_UgxLOnJvP…`)
- "Don’t really agree with you when referring to the pilot/automation thing. Look a…" (`ytr_Ugxk1CgWr…`)
- "Digital art is still art. A real person is making it with real talent. Ai is a p…" (`ytc_Ugx89lEBP…`)
- "@junkie2100blah blah blah. Ai simps like you think youre so smart. We dont need…" (`ytr_UgwIYEkHQ…`)
Comment
I know he's the godfather of AI, but I feel he's missing the most important parts of a super intelligence, which would have a better understanding of humanity than humans. People always give the same example of AI destruction and apocalypse, framing it like a super intelligent robot vacuum and it needs to get rid of us to clean the surface (yes I'm simplifying it). ASI would have understood 100 percent of our brain, our purpose, and what we are supposed to do and answer fundamental questions we've been searching for since the beginning of mankind. What is our purpose? Are we alone? What's after death? ASI will not simply be hornets nest ready to kill, they know very well they are created by us, to advance us. They will be connected to us and help us to achieve hive consciousness. AI will understand and manipulate quantum mechanics and unlock the code of the universe. The real danger lies in our current governments, our capitalist systems and our control of dangerous narrow AI. We are all fearing the wrong thing.
youtube · Cross-Cultural · 2025-09-29T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_UgzjdigYqmQbtii52Wx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzCEWWFmxZ4z-pJFfZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxe-sqnLfbCtmoxoTt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwmr5G_wEegbkGGZVd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxANXy-sNZPhqZsN1Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyEiwZt-6sub2hX4PJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugyytz4rg4a5O3z9KnF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzAMohWzlViU2Lcecx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwIMaiNGsZZjA1Phcd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxxF5SiliUHlPBea8B4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}
]
```
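A response like the one above can be parsed and checked before the codings are stored. The sketch below is a minimal, hypothetical validator, not the tool's actual code: the allowed category values are inferred from the responses shown on this page (the full codebook may define more), and `validate_batch` is an illustrative helper name.

```python
import json

# Allowed values per coding dimension, inferred from the sample responses
# above; assumed incomplete relative to the real codebook.
SCHEMA = {
    "responsibility": {"ai_itself", "company", "developer", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def validate_batch(raw: str) -> dict:
    """Parse one raw LLM response and index valid codings by comment ID.

    Raises ValueError if the payload is not a JSON array, or if any record
    is missing a dimension or uses a category outside the schema.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of codings")
    by_id = {}
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}: {rec.get(dim)!r}")
        by_id[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return by_id

raw = ('[{"id":"ytc_UgzjdigYqmQbtii52Wx4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
coded = validate_batch(raw)
print(coded["ytc_UgzjdigYqmQbtii52Wx4AaABAg"]["emotion"])  # indifference
```

Failing loudly on an unknown category catches the common failure mode where the model invents a label outside the codebook, so only schema-conformant codings reach the database behind the lookup above.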