Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
When they say it's a 10% chance, do they mean the premise breaks (all AI)? Or are they saying that 10% of their specific AI will break something?
I mean, the numbers are made up anyway but I want to understand what these folks think they are doing.
Source: youtube · AI Moral Status · 2025-10-30T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgxgrK6C2Uao6798G7R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzrYwQ_ZYtGkegqHtV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw-_boNT2UHH-KKDep4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxtpzWAN0_e8eE9p-F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx4EJsMOUikWacNTml4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxF4bXUctfpg4nSK9h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyN9kO7i9XbC_VyJI14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzaOS5tyiTeC6YSXLd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz9FH0P2EV96FON3Yx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyIkkQde0j9HOJ2gU94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}]
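A batch response like the one above can be parsed and sanity-checked with a short sketch. This is a minimal illustration, assuming Python; the allowed category values are inferred from the sample records shown here, not from a published codebook, and the real pipeline may accept more:

```python
import json

# Allowed values per dimension, inferred from the sample output above
# (assumption: the actual codebook may define additional categories).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"approval", "outrage", "fear", "mixed", "unclear"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose
    dimension values all fall inside the allowed sets."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(parse_coded_batch(raw))
```

Validating before ingesting is worthwhile here precisely because of failures like the stray closing parenthesis in the raw response above: `json.loads` raises on malformed output, so a broken batch is caught at parse time rather than silently coded as "unclear".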