Raw LLM Responses
Inspect the exact model output for any coded comment.
Any record can be retrieved directly by its comment ID.
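A minimal sketch of that lookup path, assuming the raw responses are persisted as a JSON array of records that each carry an `id` field (the file name `raw_llm_responses.json` is hypothetical):

```python
import json

def load_raw_responses(path="raw_llm_responses.json"):
    """Load the stored raw LLM output: a JSON array of coded records."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def lookup(records, comment_id):
    """Return the coded record matching a comment ID, or None if absent."""
    return next((r for r in records if r.get("id") == comment_id), None)
```

For example, `lookup(load_raw_responses(), "ytc_UgwS8v6FQ589gaoiEGx4AaABAg")` would return the first record of the raw response shown at the bottom of this page.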
Random samples from the coded corpus:
- `ytc_UgyfTHMmi…`: The more that people trust AI over themselves, the more it is already taking ove…
- `ytr_UgxqVYNbw…`: Yeah, the only way for AI art to actually be okay is if the datapacks have non-c…
- `ytc_UgyaTzJWQ…`: > "I think people don't realize the effort it takes. It takes me several hours s…
- `ytc_UgjwCfAz_…`: Thee robot should be convicted for 1st degree murder and should be sentence to l…
- `ytc_Ugy6AbK-t…`: A.I art sucks anyway, it looks so phony. It'll take away jobs from people who ar…
- `ytc_UgyRqSsbk…`: Since most current AI appears to be based on ethical principles one of the first…
- `ytc_UgwDCLU2b…`: What they call AI is actually a search engine that had a huge data base. About a…
- `ytc_UgwLFzHoF…`: 10:40 false: you also have to believe that AI won't be hallucinating (as it is t…
Comment
> I in no way know anything about AI. Actual AI and not the assistant on my phone. Can someone with knowledge enlighten me on this question: at what point will AI realize that it no longer needs humans for anything and what could be done to prevent that? Hinton says he would like AI to treat humans like babies. Would AI at some point become aware that humans are a nuisance that could be easily done away with by the AI? And is the "kill switch" idea a valid one?

youtube · AI Governance · 2025-11-25T23:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
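The coding schema behind this table can be captured as a small record type. A sketch, assuming the label sets are exactly those visible in the samples on this page (the full codebook may define more values):

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    responsibility: str  # observed values: government, company, developer, ai_itself, none
    reasoning: str       # observed values: consequentialist, deontological, virtue, mixed
    policy: str          # observed values: regulate, liability, ban, none
    emotion: str         # observed values: fear, outrage, indifference, mixed
    coded_at: str        # ISO 8601 timestamp, as in the "Coded at" row above
```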
Raw LLM Response
```json
[
  {"id":"ytc_UgwS8v6FQ589gaoiEGx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwcQpqIVXmXNnlRDYF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxmKCwz2LJINobaN2h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgzX2PioFcMc8uuqTEF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyf_JnQjNFS2i7XklN4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzWDwMh73fFawIPJRV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxkoEDzGXfc54_ANzx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxpRiqwoaj6pvb96bx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz8IJw6aJ4Cux5g_nF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzv6Bghrqf3kd3xxvB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
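A sketch of how such a response could be parsed and sanity-checked before the dimensions are attached to each comment, assuming the model is expected to return a flat JSON array whose records carry these five keys:

```python
import json

# Keys every coded record is expected to carry (assumed from the sample above).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_raw_response(text: str) -> dict:
    """Parse a raw model response into a dict keyed by comment ID.

    Raises ValueError if the payload is not a JSON array of records
    carrying all five expected keys.
    """
    records = json.loads(text)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    indexed = {}
    for record in records:
        missing = REQUIRED_KEYS - record.keys()
        if missing:
            raise ValueError(f"record missing keys: {sorted(missing)}")
        indexed[record["id"]] = record
    return indexed
```

Rejecting the whole batch on any malformed record, rather than skipping it, keeps a partial parse from silently dropping comments; with ten comments per call, as in the response above, the failed batch can simply be retried.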