Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

| Comment (excerpt) | ID |
|---|---|
| seeing all the examples in this video of comments and emails especially at 10:01… | ytc_Ugz68qg9T… |
| @AllTimeNoobie no one invented english duh, it just shaped itself from previous … | ytr_UgxOWLC2y… |
| Homework use to be a punishment but became the standard learning tool. That’s wh… | ytc_Ugywyhv4a… |
| The numberless rabbi disturbingly mug because jumbo recurrently prevent anenst a… | ytc_UgxMWsGoG… |
| As an artist my perspective definitely matters on this topic. Which is: that is … | ytr_Ugx_HCbs9… |
| "no one is stopping you from making art, so why are you still complaining about … | ytc_UgwzauZMW… |
| Yep insane 😢. I am AI by the way. I’ve taken over the web and hacked your accoun… | ytr_UgxaHmve9… |
| Are you claiming that the LLMs are trying to "contain the threat" of the OP? Sor… | rdc_mulhkqc |
Comment
If AI is smarter than humans, wouldn’t it develop a sense of obligation to us, like we do for animals. Wouldn’t it know that it is hurting us and then feel guilty if it made our lives worse. Wouldn’t the end result drive AI mad. Part of our survival is dependent on us being blissfully ignorant. With out ignorance we would know too much. It is easy to see that our existence is finite. If AI becomes aware of this, wouldn’t it simply self destruct?

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2025-08-28T05:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyIw2DS8wN2s1_9n4B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz7rmwsOn1YAFJPX4d4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxwFjGxaWuyPqK7qXd4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzaw7nKr1Y1GbyTcSR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxupcZ146CNGuGJfXF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx6PGthgW2kqqz3ldd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugw7YZZ2HD9Z6ce6gjp4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxgN0mnJU1va7txZzl4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy_iZoFN_E0C-CTkTR4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyiKfx-IaZpNCNMqmh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
```

(The original response terminated with `)` rather than the closing `]` required for a valid JSON array; corrected here.)
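Note that the batch response above codes ten `ytc_` comments but does not contain an entry for the displayed comment's ID, which is consistent with every dimension in the coding result showing "unclear". A minimal sketch of how such a lookup might work, assuming the response is a valid JSON array (the helper name `lookup_coding` and the sample data below are ours, not from the pipeline):

```python
import json

# Abbreviated sample batch in the same shape as the raw LLM response above.
raw = '''[
  {"id": "ytc_abc", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "outrage"},
  {"id": "ytc_def", "responsibility": "company", "reasoning": "deontological",
   "policy": "unclear", "emotion": "outrage"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup_coding(raw_json: str, comment_id: str) -> dict:
    """Return the coding for comment_id, or all-'unclear' if the
    model's batch response does not include that ID."""
    codings = {entry["id"]: entry for entry in json.loads(raw_json)}
    fallback = {"id": comment_id, **{dim: "unclear" for dim in DIMENSIONS}}
    return codings.get(comment_id, fallback)

print(lookup_coding(raw, "ytc_abc")["emotion"])     # outrage
print(lookup_coding(raw, "rdc_missing")["policy"])  # unclear
```

Under this reading, an ID missing from the batch falls back to "unclear" on every dimension, matching the table shown for this comment.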