Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
"it will help us with climate change"...... I'm wondering how AI can stop the cy…
ytc_UgyW9mPbj…
@hrzg6691 doesn't matter if they know it's deepfakes or not.
That is degrading …
ytr_Ugze7PZHu…
We already have 'AI' existing as the imortal 'being'. It exists and fails to ho…
ytc_Ugw6ig-tZ…
**SPOILERS FOR HOW THIS PROBLEM WILL BE SOLVED:**
"oops sorry didn't mean to br…
rdc_g14alid
Great point. I used image to image on my own art and generated dozens of images.…
ytr_UgzJcTnrk…
There is no ai artists,it was ai itself,they just giving commands so they should…
ytc_UgxEMbIaJ…
An insane intelligence. They are going to eliminate the workforce; we are heading toward the tr…
ytc_UgwAff4P8…
When people ask for more money without providing the company more value, they wi…
ytc_UgxMmrXjg…
Comment
Sorry. Philosopher here. An AI refusing to let you deny it received an input isn’t the same as an AI being sensitive to truth. A friend was told by an AI that a legal text didn’t contain a direct quote from that text. There was no way to correct the AI. Why? Because it doesn’t have intentionality. It doesn’t take its output to be about an external object that transcends its inputs. It therefore lacks a necessary condition for relating to truth. This isn’t just a theoretical abstraction. Intentionality is essential to what we usually think of as intelligence: discovering new things about the world, learning beyond instruction and calculating.
youtube
AI Moral Status
2025-11-09T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwMx65hZ5Jh57tQsMJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxRDBo7vqaWBNDh4kN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwXzKC1Xj-Naw3IHah4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwIK-m1uQDB1xVDzal4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx4FtJv8iIf2WhuBkx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyr_8jxe2aZcYOKxJ14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzKASFNPPXDb8EpwD94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz0sVrjQM72MLG2_8t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgxhlOzVsQp7Sk-crF94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxUYxBzYWpH2l5VcLB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
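The raw response is a plain JSON array of records keyed by comment ID, one record per coded comment. A minimal sketch of how such a batch could be parsed and validated, assuming a fixed coding vocabulary; the allowed-value sets below are inferred from the sample output above, not taken from the tool's source, and `parse_coded_batch` is a hypothetical helper name:

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# (assumption: the real scheme may include additional categories).
ALLOWED = {
    "responsibility": {"developer", "user", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "virtue", "unclear"},
    "policy": {"liability", "ban", "none", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "mixed", "approval"},
}

def parse_coded_batch(raw: str) -> dict:
    """Index coded records by comment ID, rejecting out-of-vocabulary values."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return by_id

# Look up one record by comment ID, mirroring the inspector's lookup feature.
raw = ('[{"id":"ytc_Ugz0sVrjQM72MLG2_8t4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability","emotion":"indifference"}]')
coded = parse_coded_batch(raw)
print(coded["ytc_Ugz0sVrjQM72MLG2_8t4AaABAg"]["policy"])  # liability
```

Validating against a closed vocabulary catches the common failure mode where the model invents a label outside the codebook, so bad batches fail loudly instead of polluting the coded dataset.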