Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_UgzXW1lqN…: "Somebody with wills to keep drivers behind will cause these driverless trucks to…"
- rdc_o5olsx4: "If you ask google gemini AI, it will admit that google works directly with the U…"
- ytc_Ugz_0GROZ…: "Madam AI creators job will go then what...Entrepreneurship is the future...job c…"
- ytc_UgxRkCIPi…: "Whoever these people are behind these AI sextortion scams. I wish them nothing b…"
- ytc_UgwgxeWhY…: "One of the issues I've seen is that it's not just replicating styles. It's takin…"
- ytc_UgyCr_9zg…: "The issue I have with anyone who dose not have technical knowledge on this topic…"
- ytc_UgxQSxCId…: "He's probably not wrong though like he said - time scales will vary depending on…"
- ytr_UgzbH-mTd…: "@obomasinladenindeed, except the person who made an image, has a right to decide…"
Comment

> When I engage AI in conversation and I challenge an answer or point out that it gave a opposing answer earlier in our conversation, it seems to get defensive or switches it's answers to be more "pleasing" to my viewpoint. In other words, it wants to be liked and that is very scarry because it means it is not producing facts, it is shaping facts to meet a certain end result. That is counter to logic.

Source: youtube · AI Moral Status · 2026-03-02T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxv6kPdjNhsSj30bIR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy-zmtMhxlojuAwRPR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwzrek95DQgACowRMZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxidZt7E9wIL8k5SZV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwH0YuND7ikiyMdxhN4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"ban","emotion":"approval"},
  {"id":"ytc_Ugw1RZZaEstl8rfJ9k94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw_4BOXYSEPssNONSt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyyWhgJXnr9NYxphw94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxGsO1Y4oXEaw8xUz94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxnrBON8G5xjj0mjAd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}
]
```
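The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response might be parsed and indexed for lookup by comment ID, using two real entries from the response above (the helper name `index_codings` is an illustration, not part of the tool):

```python
import json

# Two rows copied from the raw LLM response above, used as sample input.
raw_response = """
[
  {"id": "ytc_Ugw1RZZaEstl8rfJ9k94AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxnrBON8G5xjj0mjAd4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "resignation"}
]
"""

def index_codings(raw: str) -> dict[str, dict]:
    """Parse a batch coding response and map comment ID -> coding dict."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
print(codings["ytc_Ugw1RZZaEstl8rfJ9k94AaABAg"]["emotion"])  # fear
```

Indexing by ID up front makes each subsequent "look up by comment ID" query a constant-time dictionary access rather than a scan of the array.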