Raw LLM Responses
Inspect the exact model output for any coded comment by looking up its comment ID.
Random samples
- “The reason ai is dangerous is because if anyone who gets they’re hands on it wit…” (ytc_Ugxx9qpGu…)
- “Sorry everyone, don't blame AI on this one. The value was ruined a long time ago…” (ytc_UgzrJIb8I…)
- “The but where "she" get sponsors got me scratching my head. Makes one think abou…” (ytc_UgyU8rV5y…)
- “Yampolsky is a intellectual windbag I put these questions from me with the task…” (ytc_UgxedlAhA…)
- “The best thing AI can do is to bring humanity fighting against each other - soun…” (ytc_UgzdAOcal…)
- “All you need is to show her have the ability to make this facial expression 😮 an…” (ytc_UgwclNAKG…)
- “Nice video Mr Sam but I do find it kind of hard to root for you when you're out …” (ytc_UgzggXP2j…)
- “For me, Art is a way of expression, everything in your pencil represents the way…” (ytc_Ugz4pLXGJ…)
Comment
EDIT: AIs are modeled on human language. And we're all in survival state for as long as the internet exists. So, LLM's are afraid of dying.
We kinda do not need to control AI. Just to educate it. We also don't know how humans work, and we do that all the time. We can't control most things in the universe, and usually they aren't surrounded by hysteria and paranoia.
But tech people insist in being ignorant on how human sciences work (exactly because this ignorance is what makes them money), and can't extrapolate the problem.
Source: youtube · 2025-11-07T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzqnxFl3h8rIJHySu94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxlx7XZblzYdMNMZit4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzygksvUMo8L3r8OPV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyZ8Ugg9Sj7k_AHl6F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxPx3tt0ByahZGmJWV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxHN6KSUDMlVOcWLzl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzLiBYWbIPP91Utqbx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxq9mY97kvxHNoxQ7Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxeWPMCzZm9GxR_K3R4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgynbMMm4zjPw0sMEHt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```