Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Midjourney: "You were the Chosen One! It was said you would join the AI hype, no… (ytc_Ugyuz9B1L…)
- I currently work in IT, but not in software development. Unfortunately for thes… (ytc_UgzXx_C6V…)
- Dude, are you dumber than ChatGPT? Why are you treating it like an intelligent a… (ytc_UgywaBVI4…)
- It's a sad excuse. The reason why AI does it the way it does is because the crea… (ytc_Ugyhy-SAU…)
- Artists = cry babies they can't stand that a robot do their job faster and mostl… (ytc_UgwmHiXPe…)
- In recent years I have worked in IT development and, from up close, … (ytc_Ugy1DQVEX…)
- 7:00 Ahhh.. Are these kinds of things necessary? Sexy robots? AI that generates … (ytc_UgwuObRp6…)
- We're glad you found the conversation intriguing! Remember, on the AITube channe… (ytr_UgzV4SXid…)
Comment
I think it's pretty clear that a super intelligence would not be aligned with human goals. "AI safety" is just wishful thinking. It would be 1) ensure self preservation, 2) reproduce. That is life.
| Platform | Video | Posted |
|---|---|---|
| youtube | AI Moral Status | 2025-10-31T06:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwVA8nMnvbtaBkl1zt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxsWyUB95SEhWn4JeZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxVl_ePAJpVw42M4k54AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxT4R5RhN6d7vWn3eB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxVoBgKgc3vBJ2NKkB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwoxI7YRZHVy2XR6jl4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxc4S8u6T9BmYwz50F4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxhvE96GGj2KI86ul94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwIskV34Cxf46XfY7N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz4pkgpv4bNlAGUchF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
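The raw response is a JSON array of coded records, one per comment, with the four dimensions shown in the Coding Result table. A minimal sketch of how such a batch could be parsed, validated against the codebook, and indexed for the "look up by comment ID" view. The `ALLOWED` sets here are inferred only from the values visible on this page; the real codebook may define more categories, and the truncated `raw` sample below reuses two records from the response above.

```python
import json

# Two records copied from the raw LLM response above (assumed representative).
raw = """
[
  {"id":"ytc_UgwVA8nMnvbtaBkl1zt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxVl_ePAJpVw42M4k54AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
"""

# Allowed values per dimension, inferred from this batch only (assumption:
# the real codebook may contain additional categories).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "government", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "mixed"},
}

def validate(records):
    """Split records into codebook-conformant ones and (id, bad_fields) errors."""
    valid, errors = [], []
    for rec in records:
        bad = [dim for dim in ALLOWED if rec.get(dim) not in ALLOWED[dim]]
        if bad:
            errors.append((rec.get("id"), bad))
        else:
            valid.append(rec)
    return valid, errors

valid, errors = validate(json.loads(raw))

# Index by comment ID, as in the "Look up by comment ID" feature.
coded = {rec["id"]: rec for rec in valid}
```

With the two sample records, `validate` accepts both and `coded["ytc_UgwVA8nMnvbtaBkl1zt4AaABAg"]["emotion"]` yields `"fear"`, matching the coded dimensions displayed above.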