Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Well I know very little about cc plus but I know how to code a calculator you’re…" — ytc_Ugw-DRPcW…
- "I believe that trying to make AI think like humans is a very bad idea.…" — ytc_UgwHFYpep…
- "The people who dont use Large Language Models "LLMs" or what everyone calls "Ai"…" — ytc_UgxUnheWb…
- "I find it surprising that Neil doesn't see the inherent danger in AGI. It's a br…" — ytc_UgwBDy0J7…
- "🛑 NO! This is inhumane! Plus, you’re researching mind control. Do you not real…" — ytc_UgwtmhXFA…
- "You're delusional if you think that. The performance is going to degrade as the …" — ytr_Ugwx7v_W3…
- "AI is not even intelligent, that's not intelligency. IF what this guy happens hu…" — ytc_UgyDmp8vh…
- "I wonder how long until people really that the A in A.I. means artificial or sim…" — ytc_UgxpSuLLm…
Comment
Text: This video feels manipulative, you should show the mixed results, the good and the bad, but instead you show only bad cases for AI, so it's not a trustworthy video
Platform: youtube
Topic: AI Responsibility
Posted: 2025-10-11T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgytlyphVodYISkI0ut4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz_MrrqJ1PTQDvge954AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwZBuey2PGxGjzFxF14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzNIu8Sd_6VQdPXMiV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxASKm6cUcl8yFSruF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw2xWuReIXqbpg18bJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzb_Lk9mr3cWve8g9R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzq5bvEp2jKlH9rMcV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxLXpQMeEU0htZdEHR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxfy9FdAf4CFtxZ6K54AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"mixed"}
]
```
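A raw response like the one above is a JSON array of per-comment codings, one object per comment ID. Looking up a specific comment's coding can be sketched as below; this is a minimal illustration, not the tool's actual implementation, and the two records in `raw` are copied from the response above only as sample data.

```python
import json

# Sample raw LLM response: a JSON array of per-comment codings
# (two records copied from the response shown above).
raw = """[
{"id":"ytc_UgytlyphVodYISkI0ut4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz_MrrqJ1PTQDvge954AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]"""

def index_by_id(raw_response: str) -> dict:
    """Parse a raw coding response and index its records by comment ID."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# Look up one comment's coded dimensions by its ID.
codings = index_by_id(raw)
coding = codings["ytc_UgytlyphVodYISkI0ut4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # company fear
```

Indexing by ID makes the "Look up by comment ID" view a constant-time dictionary access instead of a scan over the whole response.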