Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "These AI chatbots we have now are all so dumb and sycophantic. It might take at …" (ytc_UgyprMCcs…)
- "Whoa, that’s just straight evil!. Could the parents have been more vigilant? Yes…" (ytc_UgyJO-A9L…)
- "Dan Brown's 2017 novel, Origin, fictonalizes the invention of a 2-story "superco…" (ytc_UgxyC05p0…)
- "I think you're conflating a few loud CEOs (like Zucc) and know-nothing influence…" (rdc_m71hue6)
- "There are many flaws in this conversation, which is making it misleading people …" (ytc_UgzSCIRgB…)
- "The second One was easy for me because an AI would never have put those leaves i…" (ytc_UgxLgWixc…)
- "I agree with you 100% on this. I have lost people in my own family to suicide an…" (ytr_Ugzp3CBXw…)
- "@OS-yg9frbro what in the f are u talking about. yeah he should have posted the…" (ytr_Ugy0yep2Z…)
Comment
I feel like the biggest problem is that, AI are not being trained on the data that really matters. Most of our communication is nonverbal. Body language and facial expressions are so much more telling than what we say. Then there's the volume of what was said. The temperature of the room, the myriad of aromas we're basking in, background noise... So much information that we're constantly processing and reacting to that literally can and will change what we say and how we say it... And the AI. Is given none of this data. Because how could we? How could we possibly know what data is important and what data isn't, when our own brains filter out like 99.5% of it, or else we'd be in a coma or dead trying to process it all. This is the data that I believe is the most important in getting a "general A.I." to function with any sort of generality... Because it would also need that data from at the very least, 10% of every given population in every tribe, village, city, state, country and continent.
youtube
AI Moral Status
2025-11-20T09:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgxXIx_W7bLHiswmhIx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxLXcleaXTlNPchkBt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxUUSP0kNab3nzT7o94AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyTYePg7D9s3gE3KOt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwOIpUl4VPIiZoPg5p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx-YUNtAZ26NAPnffZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwd75dl7izcRCd3kbl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwXjzmz4bwrMY4YlxR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwSkhDioyap--vO98B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzTF7mqcvCHOYg5DtZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}]
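Turning a raw batch response like the one above into a per-comment coding table can be sketched in a few lines. This is a minimal illustration, not the tool's actual implementation: the function names (`parse_raw_response`, `lookup`) and the default-to-`"unclear"` behavior for missing IDs or dimensions are assumptions for the sake of the example.

```python
import json

# The four coding dimensions shown in the "Coding Result" table above.
DIMENSIONS = ["responsibility", "reasoning", "policy", "emotion"]

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of per-comment
    objects) into {comment_id: {dimension: value}}."""
    rows = json.loads(raw)
    return {
        row["id"]: {d: row.get(d, "unclear") for d in DIMENSIONS}
        for row in rows
    }

def lookup(coded: dict, comment_id: str) -> dict:
    """Return the coded dimensions for one comment; a comment absent
    from the batch gets 'unclear' on every dimension."""
    return coded.get(comment_id, {d: "unclear" for d in DIMENSIONS})

# Hypothetical single-row batch for illustration.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"fear"}]')
coded = parse_raw_response(raw)
print(lookup(coded, "ytc_example")["policy"])    # regulate
print(lookup(coded, "ytc_missing")["emotion"])   # unclear
```

Defaulting to `"unclear"` for any ID not present in the model's output is one plausible reading of the all-`unclear` table above, where the inspected comment's ID does not appear in the batch.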