Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or inspect one of these random samples:

- "With ai i feel as though we will all of a sudden just start to feel a lot more l…" (ytc_UgyrZOr3l…)
- "Oh my gosh, I've never heard of Jazza, just looked up his channel. I don't know …" (ytc_UgzaON1xf…)
- "The late great Stephen Hawking said AI would be the most dangerous thing to mank…" (ytc_UgyVAtHkm…)
- "This is a fascinating discussion. I'd like to share an observation from a long-f…" (ytc_UgzxkvREH…)
- "I think alot of developers are making claims about what they want to be true, in…" (ytc_Ugzv8Rcvz…)
- "When I use ChatGPT, I always talk to it like a person and say please and thank y…" (ytc_UgzQGjVud…)
- "I could never take this AI art outcry seriously. Its always pros acting like ev…" (ytc_UgxPvKr7f…)
- "this is an fundamental misunderstanding of how these things work, the further pa…" (ytc_Ugx607zXt…)
Comment
The fact that these robots are able to think alone is scary, you guys do understand that these robots are just a "car" for what already exist -cloud- right? Like, they take, there's an algorithm that is teaching the cloud to control things like robots and cars and such... Ugh never mind.
youtube · AI Moral Status · 2019-09-20T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxOwTqfbinw-cZLSyJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzlpGHnplkMMlqlnNt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyxDgE4x24ySCYBj8V4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz9QVj84XRX_vaqpGB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxY7hky1jk8EVWvFSx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyiwvhgkN_t7ROA10t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwedMcNEtamu5i0Ce14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzJ7h0d7ZYzeSyFLzl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzWec2BAuNHez9e7_B4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy5zEwTsdJfuEfOeth4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```