Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- “If they’re similar it’s only because China steals, copies and hacks every countr…” — `ytc_Ugwi2mdpd…`
- “A good question to ask would be, is it possible for AI to develop wisdom? Wisdo…” — `ytc_Ugw91IMPn…`
- “What's interesting here is that we can take notes from this. I just tried adding…” — `rdc_koqdwt4`
- “Construction and building industry will be the only industry safe from ai taking…” — `ytc_UgwMXUAA3…`
- “Maybe one flaw I see in that scenario is that you still need people to consume t…” — `ytc_UgwJSTy0A…`
- “I am a computer animator and my partner is a concept artist. They were unemploye…” — `ytc_UgwvPlsEr…`
- “ai artists will never understand the joy of creating (unless they repent and pic…” — `ytc_UgxZRQhuz…`
- “If an AI technology does the work of a human at a company, it should be taxed an…” — `ytc_UgyhiuIfN…`
Comment
> Humans are easy to fool, easy to trick. We've built models using human language. They've been trained to fool us. It is remarkable at how little is actually there behind the language. Its like an executive who is always confident and knows all the words but hasn't a clue... and doesn't even care. I also know the software field and the irresponsible optimism we have. We incredibly overestimate how capable our software is and how close further improvements much be. So we've fooled ourselves by creating models designed to fool us. Its not nothing, but we have no idea truly how very far we are from actual capable AI.
youtube · AI Moral Status · 2025-10-30T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx1_ez-0vl8tEvhPGR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzM4bqngjE5_ib5sKJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxcTb6i8AUGg19T2n54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxPfOmj4m_Aube5q4J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxE54jNX8p3yYjG0W54AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugyi_QHZ-dhPQu0-UFB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyGk8_0HvVBwUZdVCJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyQLyHJl3d48kzDxI14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwe5HXo6jXaynqJ0ZF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxiMiO945P8eZMsdu14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
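The raw response above is a JSON array of coded rows, one per comment ID. As a minimal sketch of how such output could be parsed and checked, assuming the category values visible in the samples above approximate the full codebook (the real codebook may contain additional categories):

```python
import json

# Allowed values per coding dimension — inferred from the rows shown above,
# not an authoritative codebook.
CODEBOOK = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "approval", "fear", "outrage"},
}

def validate_rows(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values are in the codebook."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items())
    ]
```

For example, `validate_rows` would keep a row coded `{"responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"}` and drop one with an out-of-codebook value, which is a cheap guard against the model drifting from the prompt's label set.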