Raw LLM Responses
Inspect the exact model output for any coded comment: look one up directly by its ID, or pick from the random samples below. A minimal lookup sketch follows the list.
- We appreciate your engagement with the video. While AI advancements are indeed f… (ytr_UgzZBE4v0…)
- This man understands alot about AI but he doesn't realize that we barely underst… (ytc_Ugx03hasf…)
- Sam is not ethical if he gave AI to Israel that uses AI against civilians. It wa… (ytc_UgyOxadkb…)
- Look, AI currently is not doing all that much. Let me give you solid examples. M… (ytc_UgwwGa-it…)
- Hopefully they can shut it down. India is playing both sides -- so they just jo… (rdc_lu9wg8j)
- "How long can you be safe from PRYING EYES?" she spoke in an idiom. Talking to t… (ytc_Ugx3hqIwi…)
- Events are good as self-destruction they don't need any help AI will just hasten… (ytc_UgxhseCVg…)
- So... AI and physical automation are very different things. They still both have… (ytc_UgxakliOK…)
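Under the hood, a lookup like this only needs the coded results keyed by comment ID. The sketch below is illustrative, not the dashboard's actual implementation: it assumes the codes are stored one JSON object per line in a hypothetical `coded_comments.jsonl` file whose objects match the raw LLM response format shown further down.

```python
import json


def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the coded record for a comment ID, or None if it is absent.

    Assumes one JSON object per line, each with an "id" field plus the
    four coding dimensions (responsibility, reasoning, policy, emotion).
    The file name and storage format are assumptions for this sketch.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue  # tolerate blank lines
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None


# Example: fetch the codes for one of the IDs from the raw response below.
print(lookup_comment("ytc_UgzkOZ9gP_HOkPWjKnJ4AaABAg"))
```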
Comment
The things that keep people from killing each other are not applicable to AI.
AI may not even try to kill us, it just would, like when we step on bugs and not even think about it, we may just be bugs it doesn't think about, AI may start doing something that it wants to do and end up killing people as a side effect, like polluting the environment with things that will kill us, its not doing it to kill us, its just doing what it wants, we die as a side effect and it doesn't bother it one bit... that's just one possibility.
youtube · AI Responsibility · 2025-07-03T17:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
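For downstream analysis it can help to treat each coding result as a typed record. Here is a minimal sketch, with value sets inferred only from the outputs visible on this page; the project's full codebook may define more categories.

```python
from dataclasses import dataclass

# Vocabularies inferred from the responses shown on this page;
# assumptions, not the project's authoritative codebook.
RESPONSIBILITY = {"ai_itself", "company", "developer", "distributed", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"regulate", "ban", "liability", "none", "unclear"}
EMOTION = {"fear", "outrage", "resignation", "mixed"}


@dataclass
class CodingResult:
    """One coded comment: the ID plus the four coding dimensions."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
```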
Raw LLM Response
[
{"id":"ytc_UgwKwYaEtyhoNxiE2CF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzfhG6NwNkQSGbPKlh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwU59UlteFqjGqfkDB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz6IG2cwYB7_cj4ieZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzjokweHPVb3VoUIIN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzkOZ9gP_HOkPWjKnJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugymv6MtTbDOOkEH0qF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyiB5OAnSUVxckdrqV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzqE-NQYeV3u8KQ_xx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwVitd1fr_9U2-h3194AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
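The raw response is a plain JSON array with one object per coded comment, so it parses with a standard JSON library. Below is a sketch of the kind of shape check a pipeline might run before accepting a batch; `parse_llm_response` and `EXPECTED_KEYS` are illustrative names, not part of the actual system.

```python
import json

# Expected keys per object, taken from the raw response shown above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_llm_response(raw: str) -> list[dict]:
    """Parse the model's raw output (a JSON array, one object per coded
    comment) and check that each object carries exactly the expected keys."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding objects")
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        extra = rec.keys() - EXPECTED_KEYS
        if missing or extra:
            raise ValueError(
                f"{rec.get('id')}: missing keys {sorted(missing)}, "
                f"unexpected keys {sorted(extra)}"
            )
    return records
```

Validating keys at ingestion catches truncated or malformed model output before it reaches the coded dataset.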