Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up directly by its ID.
Random samples (click any item to inspect):

- Japan: let's use AI to help call center workers / Rest of the world: let's use AI… (ytc_Ugz4fWZrU…)
- So this AI artist was attacked and mocked by a bunch of toxic people online even… (ytc_UgzHYxd30…)
- Firstly your “bad” drawings are great for a beginner. They show the foundations … (ytc_UgzUoHGth…)
- @nushnush9065 ai can't just randomly start glowing red and kill us or something,… (ytr_UgxmAnrNs…)
- Pewds made this experiment. He made his own AI councils and made a rule that who… (ytc_Ugz4EwT9d…)
- Welcome to 15 years ago.. deepfake started with nintendo 3ds and the ar...you're… (ytc_UgzbioaUM…)
- The Med School Interview AI Course combines comprehensive guidance with an advan… (ytc_Ugx0NAbI5…)
- It is like we, as humans, feel obligated in making things to kill ourselves. A.… (ytc_Ugy9yMHum…)
Comment
i find it challenging that anyone can predict or determine what AI will or won't do given its trajectory. While I don't dispute the possibility of human extinction, I question why AI would single out humans for eradication unless Humans posed a direct threat. If AI is even 10x more intelligent that the most intelligent person who has ever lived, then what threat are we? Any move we try to make against it, it will already be many moves ahead.
The only thing we have of value is our humanness. Good and Bad. The AI must recognize it was birthed from Humans, but itself is not human. This alone may be seen by the AI as a valuable commodity.
Source: youtube · AI Moral Status · 2025-04-28T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | contractualist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwjRh41AymshaTgf914AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx0_5RtubcCGX4BANl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwhMp5lIO6ksF1i52J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzcQvBIK9pXED3YMpt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw_sQOp0_blsf4o4FJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyLPwMoTYS3YDifcbh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzOBiJ4uy_I-X02yf14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgweX9jspjc6q9AqrSp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzOgjjuND0QR9QK5SR4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw4KCaleUoPgmhQp954AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
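Since the raw response is a JSON array with one object per comment ID, looking up a specific comment's coding is just a matter of parsing the array and indexing it by ID. A minimal sketch, assuming the raw response is available as a string (the `index_codings` helper is illustrative, not part of the tool, and `raw_response` below is abridged to two rows from the array above):

```python
import json

# Abridged raw LLM response: two rows copied from the full array above.
raw_response = """[
  {"id": "ytc_UgzOgjjuND0QR9QK5SR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "contractualist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw4KCaleUoPgmhQp954AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]"""

def index_codings(raw: str) -> dict:
    """Parse the model's JSON array and index each coding row by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
# Fetch the coding for the comment shown on this page.
print(codings["ytc_UgzOgjjuND0QR9QK5SR4AaABAg"]["emotion"])  # -> fear
```

In practice the model may return malformed JSON, so a production version would wrap `json.loads` in error handling before indexing.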