Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
It could make sense if they don't sequence that much. This is confirmed, sequenc…
rdc_hm7dg82
"Democratising"? What's with these nerds thinking that art isn't already democra…
ytr_Ugz5wCcwF…
Oh, i love trauma dumping and advice seeking while waiting the 4-6 months for th…
ytc_Ugw3x94rS…
It would be illogical for anthropic to not be doing this with the IPO coming. Sl…
rdc_obvxurv
imo AI development doesn’t really matter, except for using it to control the mas…
ytc_Ugxazph-2…
The poison, the poison for ai "art"? the poison specifically designed to turn it…
ytc_Ugw6WIfku…
> Also people were arguing that existing non-consentual porn laws and rulings…
rdc_lgnnfk2
What they are forgetting is what happens when unemployment goes above 20% becaus…
ytc_UgxCSPnGn…
Comment
21:20 teaching them to feel pain was the first thing we tried. it’s called reinforcement learning where you “reward” or “punish” the model. you might have a score and the AI doesn’t have to know about the abstract concepts of what it’s trying to do it just has to make the score go up. then you go in and provide it some things it can try and maybe some other inputs and then code the score to change according to your expectations. for example if you wanna teach an ai to walk you make the score go up when it moves forward, and down when it falls or gets stuck. you give it control of its legs and some image data along with the score as an input and have it output values to send to its leg motors as outputs.
youtube
AI Moral Status
2025-10-30T23:2…
♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx1_ez-0vl8tEvhPGR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzM4bqngjE5_ib5sKJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxcTb6i8AUGg19T2n54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxPfOmj4m_Aube5q4J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxE54jNX8p3yYjG0W54AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugyi_QHZ-dhPQu0-UFB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyGk8_0HvVBwUZdVCJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyQLyHJl3d48kzDxI14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwe5HXo6jXaynqJ0ZF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxiMiO945P8eZMsdu14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
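A raw batch response like the one above can be checked before the rows are written back as coding results. Below is a minimal validation sketch in Python; the sets of allowed labels are inferred only from the values visible in this dump (they are an assumption, not the tool's actual schema), and `validate_batch` is a hypothetical helper, not part of any real pipeline shown here.

```python
import json

# Allowed labels per dimension, inferred from the values seen in this dump.
# This is an assumption about the schema, not its definition.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "approval", "fear", "outrage"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and reject rows with unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={value!r}")
    return rows

# Usage: one well-formed row passes, a misspelled label raises.
good = '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear",' \
       '"policy":"unclear","emotion":"indifference"}]'
rows = validate_batch(good)
print(len(rows))  # 1
```

Validating before storage means a model that drifts off the label set (a common failure with free-form JSON output) surfaces as an explicit error rather than a silently miscoded comment.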