Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up directly by its ID.
Random samples — click to inspect
- LMAO, she is asking about fake images lol. She is so clueless as to what AI can … (ytc_UgzIpImJ9…)
- The body language when he asked about the Turing test. You can tell the AI doesn… (ytc_UgzkH1g4x…)
- Ill admit, I was almost caught off gaurd at how realistic it was. I almost suspe… (ytc_UgxHQjlmL…)
- That the most terrific whith all the ai issue nowaday / Is the gaslight of the Ceo… (ytc_UgzDNubj7…)
- As a indie dev i cant agree more. Ai makes people really lazy because they dont … (ytc_UgyPLIy8A…)
- If you think these AI chicks won't learn how to manipulate like real chicks, you… (ytc_UgyoX3yAS…)
- AI sucks when the image is 240p lmao they need better security cameras for ai de… (ytc_UgyLFLOR4…)
- @laurentiuvladutmanea "Not how these the programs „do it”. They do not actually … (ytr_UgzUDOkJL…)
Comment
Option one: If one country achieves AI supremacy first, will that inevitably lead to war? 🤔
Second scenario: If all countries are competing in AI development, the drive to advance the technology might force them to accept increasing autonomy in AI systems. This seems somewhat logical, as independent decision-making and self-monitoring could be essential for progress.
However, would this autonomy necessarily lead to AI becoming conscious? I doubt it. While things could certainly go wrong, it wouldn't necessarily be due to AI developing its own conscience... 🤔
youtube · AI Moral Status · 2025-04-27T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
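
Each coded comment carries the same four dimensions shown in the table above. The sketch below is a minimal Python model of that record; the allowed values are inferred only from the codes visible in this sample (the real code book may define more categories), and the class and field names are illustrative rather than taken from the project.

```python
from dataclasses import dataclass

# Allowed codes per dimension, inferred from the sample output below;
# the actual code book may contain additional categories.
RESPONSIBILITY = {"government", "developer", "ai_itself", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"regulate", "ban", "liability", "none"}
EMOTION = {"fear", "outrage", "resignation", "indifference", "approval", "mixed", "unclear"}


@dataclass
class CodedComment:
    """One coded comment, mirroring the Coding Result table."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Reject any code that falls outside the known categories.
        checks = [
            (self.responsibility, RESPONSIBILITY, "responsibility"),
            (self.reasoning, REASONING, "reasoning"),
            (self.policy, POLICY, "policy"),
            (self.emotion, EMOTION, "emotion"),
        ]
        for value, allowed, name in checks:
            if value not in allowed:
                raise ValueError(f"unknown {name} code: {value!r}")
```
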
Raw LLM Response
[
{"id":"ytc_UgyzVC-CmhOyGpmG3Sh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgyCs17wldP2ZXCXIV94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzaUXrEj4xpJo07c954AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgwO4WZEjpkl9TuXiP14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx77ya9jPlc1hu3B8N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzJA3dfUFYvxzc65it4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzXB3R7cdDGJuXGoIF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyCFdb0nzWcZTRdgc54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"unclear"},
{"id":"ytc_Ugxku6qVdBVRYbQa5hF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwDbn2cbLG-zAbjJ-x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
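
The raw response is a JSON array with one object per comment, keyed by the comment's `id`. The following is a minimal sketch of the look-up-by-ID flow described at the top of this section, assuming the raw response has been saved to a text file; the file name and helper function are illustrative, not part of the project.

```python
import json


def index_raw_response(raw_text: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array) and index its rows by comment ID."""
    rows = json.loads(raw_text)
    return {row["id"]: row for row in rows}


# Illustrative usage: load a saved response and inspect one comment's codes.
with open("raw_llm_response.json", encoding="utf-8") as fh:  # assumed file name
    by_id = index_raw_response(fh.read())

codes = by_id["ytc_UgyCFdb0nzWcZTRdgc54AaABAg"]
print(codes["responsibility"], codes["policy"])  # -> government regulate
```
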