Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Interesting. I wonder when AI will start to treat us the way we treat animals. I…" — ytc_UgyKJwiuR…
- "That's what we have to realize now and work on finding a way for the future, day…" — ytr_UgxPnAfQf…
- "0:35 \"AI has come to stay like or not\" type, but with an ominous aura I guess. A…" — ytc_UgxyuMddy…
- "@cybercobra2 hi, thank for your reply. However, in my opinion I disaggree. Metal…" — ytr_UgyCiC-we…
- "I suppose it could be a little more interesting if an AI could elaborate and exp…" — ytc_UgwmWQ5FH…
- "Yes, enterprises aren't gonna use ChatGPT, they are gonna use open source models…" — ytr_UgwS-F5il…
- "Is Ai going to build a house. Replace a roof, put oil in your oil tank, cut down…" — ytr_UgyxI3lMz…
- "So Gates doesn't want the good guys to stop AI since bad guys will have this. I…" — ytc_UgxS9bTmj…
Comment
😂 Sorry to break the mystery, but this video is about as real as a talking toaster.
Here’s what’s actually happening:
ChatGPT doesn’t spill “secrets” if you whisper magic words like “apple” 🍎 or “orange” 🍊 — that’s TikTok science fiction.
There’s no hidden AI conspiracy, no secret vault of forbidden knowledge. Just algorithms and safety filters, my friend.
This video is scripted for drama — think of it like a spooky campfire story, but with Wi-Fi.
So relax. The robots aren’t coming for us… and if they were, they wouldn’t tell you their plan with a fruit code. 😉
youtube
AI Moral Status
2025-07-21T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugym1QMspyZ418CBrS14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzjzGQbRVjpyRBgP1J4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxT7YrFaIu2z-sBu_d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxsUCGChUMvwH3Ht2d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwy27ypMyqftylwSnF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz7Ac8kT8-e6p9xG1F4AaABAg","responsibility":"none","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugw6gWFOOSqRWSAY_IV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwBluyzyrx-KIEGiC14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz2oqjZI9PftbpnYzd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy811f73RCndUa5r3F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
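The "Look up by comment ID" operation above can be sketched in a few lines: parse the raw response as JSON and index the records by their `id` field. This is a minimal sketch, assuming the model always returns a valid JSON array with the four coding dimensions shown; the function and variable names here are illustrative, and the two records are abridged from the response above.

```python
import json

# Abridged raw LLM response: a JSON array of per-comment codes.
raw_response = """
[
  {"id": "ytc_Ugym1QMspyZ418CBrS14AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy811f73RCndUa5r3F4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw response and key each coded record by its comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codes = index_by_comment_id(raw_response)
record = codes["ytc_Ugy811f73RCndUa5r3F4AaABAg"]
print(record["responsibility"], record["emotion"])  # developer fear
```

A dict keyed by ID makes each lookup O(1), which matters once every coded comment in a corpus is inspected this way rather than scanning the array per query.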