Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytr_UgwZ051mj…: "jetpowercom I honestly don't see why a sufficiently intelligent AI would even ne…"
- ytr_Ugyq1U6oU…: "It sounds like you're noticing the subtle expressions of Sophia! The design of A…"
- ytc_UgyfcxEY7…: "I think they thought it out or planned it out very intelligently, but without an…"
- ytc_UgzNuPayJ…: "The deployment of generative AI to consumers is nothing more than a marketing tr…"
- ytc_UgxLXH8Hp…: "Silicone valley leveraged peoples vanity and consumeristic addictions against th…"
- ytc_UgyBqEF3y…: "23 minutes in. Have you all heard of the BBB. Big beautiful bill nightmare? One …"
- ytc_UgzzvHkHk…: "Regardless of emotion, a collection of knowledge that can form responses is at t…"
- ytc_Ugxmah0Zh…: "Im tired of people blaming Ai because to me, what AI does is not art. It will ne…"
Comment
If you say to someone "stand on one foot," 99% of the time they will spread their arms for balance, stand on either the left or right foot, and raise the other, which is not literally what the instructions say. "Stand on one foot," taken literally, means put one foot on top of the other and stand on it, which is clumsy and will cause self-inflicted pain.
If you tell AI to generate an illustration of standing on one foot, it will render the alternate 99% choice 100% of the time, and even with many attempts will not arrive at the literal meaning. This is probably because AI lives in a box with no body, so it has no way to relate. Since there are a lot of robots being developed that possess AI to some extent or another, can they "think" of the alternate meaning?
Source: youtube
Video: AI Moral Status
Date: 2026-03-02T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzArKV2cvPWzuWmbbF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyQF2iKq2jdiAXOoZh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyaRljVj3uwwsuDAQp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgziKGgTkM5-KML0f9R4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz0Q0ofa8eig76AsGt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzbrg91lEErg6xs17Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyAb7Hw2kie8Of-cMN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyUue2-QStgRIg6FNV4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgypGkgIxsy8cfXqXBF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugx7q97QAe71pHNhsYB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
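The raw response above is a flat JSON array, one object per coded comment, keyed by comment ID with four dimension fields. A minimal sketch of parsing it and looking a coding up by ID might look like the following; note that the allowed values in `SCHEMA` are only inferred from the sample output shown on this page, and the real codebook may define more categories.

```python
import json

# Truncated sample of the raw LLM response shown above (first two entries).
raw_response = """[
{"id":"ytc_UgzArKV2cvPWzuWmbbF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyQF2iKq2jdiAXOoZh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]"""

# Allowed values inferred from this page's sample output; assumption, not the
# authoritative codebook.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "user"},
    "reasoning": {"unclear", "mixed", "deontological", "consequentialist",
                  "virtue", "contractualist"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "fear", "indifference", "mixed", "outrage"},
}

def index_codings(raw: str) -> dict:
    """Parse the model output, validate each row, and index it by comment ID."""
    by_id = {}
    for row in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={row.get(dim)!r}")
        by_id[row["id"]] = row
    return by_id

codings = index_codings(raw_response)
print(codings["ytc_UgyQF2iKq2jdiAXOoZh4AaABAg"]["reasoning"])  # prints "unclear"
```

Validating against a fixed schema up front is what lets a coding run fail loudly when the model emits an out-of-vocabulary label, rather than silently contaminating the coded dataset.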