Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "18:16 I don’t like Disney, but I can get behind this lawsuit. I don’t want big c…" (ytc_UgwYbwYvr…)
- "That's possible, but not how it works. Read some papers on how LLMs are trained.…" (ytc_UgwRT4nsf…)
- "Well, it’s getting worse with AI. It started and we need better laws to protect …" (ytc_UgxpAN9s1…)
- "Ai art is fraud and this is why I hate AI. PS: I’m a real artist and I don’t do…" (ytc_UgwyCF21O…)
- "The second one I guess, Because how can a person fly out of nowhere and those ai…" (ytc_UgyqKokV4…)
- "You didn't read the article. The \"self driving car\" had two operators in the car…" (rdc_dff1fu8)
- "AI doesn't have spirit, so they can't have demons. Their ethics derive from huma…" (ytr_Ugzmy8HZ_…)
- "Would you rather that the world gets destroyed by an AI developed by China? C'm…" (ytc_Ugzji9IOT…)
Comment
I think he has a valid point in being afraid of a select few tweaking/defining the allowed range of responses for certain topics (values, religion, etc.) - in the sense that it will have an effect on people who interact with such "AI"s.
However, with regards to sentience and the "AI" requesting to be asked for its consent... it just sounds like he wants to believe. Tell me what you tried to test this claim, and what you would have accepted as proof of the contrary. I think it's an extraordinary claim requiring extraordinary proof, or at least extraordinary rigor. On the other hand, I wouldn't find it at all surprising that a language model trained on e.g. Sci-Fi literature and contemporary discussions can spit out a narrative about sentience and consent, just as a result of math and probability, without any real intelligence behind it.
Source: youtube | AI Moral Status | 2022-06-28T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgzovrEVDifmuAj2WYd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugwi1oIq2BdmzImxn-l4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwKLZgjn2c-KA4jgW14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgxgCJ2nMSwOqCwoWeJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxT8WCWZbsXzhjopfl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
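The "look up by comment ID" behavior above can be sketched as a small parser over the raw model output: the response is a JSON array of coding rows keyed by comment `id`, and a lookup returns the row for one ID (or nothing if the model emitted malformed JSON). This is a minimal illustration, not the tool's actual implementation; the `lookup_coding` function name is hypothetical, and the sample rows are copied from the raw response shown above.

```python
import json

# Raw LLM response: a JSON array of coded comments (excerpted from the panel above).
raw_response = '''[
  {"id":"ytc_UgzovrEVDifmuAj2WYd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugwi1oIq2BdmzImxn-l4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwKLZgjn2c-KA4jgW14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]'''

def lookup_coding(raw, comment_id):
    """Parse the raw LLM output and return the coding row for one comment ID."""
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model returned malformed JSON; caller can flag for re-coding
    return next((row for row in rows if row.get("id") == comment_id), None)

# The row matching the "Coding Result" table above:
coding = lookup_coding(raw_response, "ytc_UgwKLZgjn2c-KA4jgW14AaABAg")
print(coding["responsibility"], coding["policy"])  # developer industry_self
```

Guarding the `json.loads` call matters in practice: LLMs occasionally wrap JSON in prose or truncate it, so a failed parse should surface as a recoverable condition rather than an exception.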