Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- `ytc_Ugx6RA1SK…`: "AI is not a revolutionary techolnology and it wont be. Its helpful for people to…"
- `ytc_UgxwIB5Jj…`: "This man is a disinformation agent… people don’t look to him for constructive in…"
- `ytc_Ugy13RjOh…`: "Just realized THERE'S A FREAKING EU (Exurb1a Universe) IN HIS VIDEOS AND BOOKS -…"
- `ytc_UgyKznOBE…`: "4:20 \"We could all have 5 times as much healthcare for the same price\". L O L .…"
- `ytc_Ugzl89dOC…`: "What happened to that provision in the \"big beautiful bill\" that made it ILLEGAL…"
- `ytc_Ugyop90bU…`: "One death maybe an accident, two deaths of relatives soon after is no accident—i…"
- `ytc_Ugx-HDnQX…`: "Share price is all. This is that blood test all over again. Self driving cars ha…"
- `ytc_UgxLnE2ZZ…`: "I can't see at all how AI seeing an image and trying to understand it for the pu…"
Comment
Plot twist: Geoffrey Hinton also thinks that LLMs could be sentient to some degree as of now. Of course, this doesn't make these people less delusional. But their delusion nicely demonstrates what AI safety researchers and people who talk about the dangers of AI have been saying for a long time: that AI can (and probably will) manipulate humans and that AI doesn't need to be sentient (or evil) to do harm.
- Platform: youtube
- Video: AI Moral Status
- Posted: 2025-07-09T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgydHNdSW3FAgModj-J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxy0pf18iNLT0aqUHV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxu4LOluX5NpYzkn3V4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwnoQJ9-7hgFFk3XSR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzzw_oWRQ_l1kbOZKx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgymW6vhxNDZ1uPJLxh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzxjMRQpAo4FGa6FD14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyMar1rOVzMgufhBnR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxE9A7XuhsrS1n17YJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwCBBNkSfdbizFdlid4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
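The raw response is a JSON array with one object per comment, each carrying the four coding dimensions shown in the table above (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and indexed by comment ID — `index_codings` is a hypothetical helper, not part of the tool itself, and the two rows embedded below are copied from the response above:

```python
import json

# Two rows copied from the raw LLM response above, for illustration.
raw = '''[
 {"id":"ytc_UgxE9A7XuhsrS1n17YJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgwCBBNkSfdbizFdlid4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}
]'''

# The four coding dimensions seen in the response schema.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Parse a raw model response and index the codings by comment ID,
    rejecting any row that is missing one of the four dimensions."""
    codings = {}
    for row in json.loads(raw_json):
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id')}: missing {missing}")
        codings[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return codings

codings = index_codings(raw)
print(codings["ytc_UgxE9A7XuhsrS1n17YJ4AaABAg"]["emotion"])  # fear
```

Indexing by ID makes the "Look up by comment ID" view above a single dictionary access, and the per-row check catches truncated or malformed model output before it reaches the database.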