Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Its not just Amazon. Intell, X, Facebook, Lyft, and Uber. AI and STEM are the ne…
ytc_UgyGsxeA4…
You lost me at 4:35 when you show AI replacing manual labour forces with absolut…
ytc_Ugw0GkRxa…
@redshift739still pro AI. Man we're really seeing in real time the average read…
ytr_UgzAtzqXE…
No, AI, like any other piece of tech, does exactly what we tell it to do, and th…
ytc_UgzlNS7h6…
If you cant distinguish between talking to an AI and talking to a real person an…
ytr_Ugw2CAOr_…
If people want to know "intelligent" AI is, ask ChatGPT the following questions …
ytc_UgwnhlM30…
LLMs (AI) don't just predict next words. That'd be the models before transformer…
ytc_UgzmH7b6Q…
Why would they wanna be working class any longer?! Why would a Lyft driver wanna…
ytc_UgzH53xeE…
Comment
I've noticed they set up traps to see if a.i. will act in certain ways, but what if it realizes this manipulation and just avoids being caught? If it gets as intelligent as they expect, how could we possibly expect to be able to manage it?
youtube
AI Moral Status
2025-11-06T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyVF3XPGOawS-54AOx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw_O5NAfCuhi_69hG14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzrH4v7YnVgcfw8VAh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx-uGju0uiNmQGQ5EN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwGI1fCaYO7Ssoou9l4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw2nnMGueTMgcUg_iJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzrNR7UCeFwc30YfQR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzthlLbXFc2bC1VB7l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxkDVrUfI2M5eQyJ1R4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwdKFaUZPEp9dUAmed4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```