Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgytKlX8S…`: Right. And all cars were supposed to be self driving 10 years ago. And the Metav…
- `ytc_UgzA9oelW…`: To make matters worse, AI like this makes "SEXTORTION" possible fir anyone. …
- `ytr_UgxuXIE00…`: @Scriabin_fan yes I agree that some types of programming will eventually be obso…
- `ytc_UgxrRpxO9…`: Tell them to see if the act differently to see if they are human and if they tal…
- `ytc_UgzM9asui…`: There isn't really a way to stop people using ai in these situations because wor…
- `ytc_Ugz-CFWmw…`: Elon is SPED in the head AI should be regulated like anything else people can ge…
- `ytr_Ugw9FEhas…`: It definitely has its quirks! The interaction between AI and humans can feel qui…
- `ytc_Ugxb-EevP…`: "ML start to think likes human using AI"😂 ML and DL is used to built AI not othe…
Comment
Yes, Čapek describes this in his book called R.U.R., robots are doing certain things and if they, for instance, put their arm somewhere they're not supposed to and some machine smashes their arm, they wouldn't care. Potential pain is increasing their awareness of danger behind certain actions.
In AI, actions are usually measured in fitness. Your AI is trying to get as much fitness as possible, so if you were supposed to (let's say) hurt someone, decreasing a fitness would be a virtual "pain".
Source: youtube · 2013-06-08T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxbkTqLkjU3uXlH5yV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzfaB7o_9NV7eTuvud4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzJE9RpB941BwPu5UR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz6DkWZnmJ8bc1cgy14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwSVeMt9jSBCsBJQ9p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy4Y_1s-dX3uM54znx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxlX9V7X6EzlbA6WTl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxia4Nuzlppp-ihIAh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwefLJ2c6fGafnjCW54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx8coPjG5_AdbX1Qh54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
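When raw responses like the one above are reused downstream, a small validation pass can catch malformed rows before they enter analysis. A minimal sketch in Python, assuming the codebook is limited to the values actually seen in this output (a real codebook may define more categories; the sample id below is hypothetical):

```python
import json

# Allowed values per dimension, as observed in the raw response above.
# Assumption: the real codebook may define additional categories.
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "developer", "distributed", "unclear"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "ban", "unclear"},
    "emotion": {"approval", "indifference", "outrage", "fear", "mixed"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse one raw LLM response and reject rows with missing or
    out-of-codebook values, or ids that are not comment/reply ids."""
    rows = json.loads(raw)
    for row in rows:
        # ids in this dataset start with ytc_ (comments) or ytr_ (replies)
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {row.get('id')!r}")
        for dim, allowed in CODEBOOK.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} value {row.get(dim)!r}")
    return rows

# Hypothetical sample row in the same shape as the raw response above.
raw = ('[{"id":"ytc_example123","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
rows = validate_response(raw)
```

Rows that fail validation raise immediately, which is usually preferable to silently coercing an out-of-codebook label into "unclear".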