Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Yet another special school doing all the things that ALL schools SHOULD BE DOING…" (ytc_UgzVYH73F…)
- "It shouldn't be allowed to have driverless cars. Who is getting the ticket if th…" (ytc_UgwAVfOv8…)
- "I do want to hear more about the ai learning vs human learning art argument. I f…" (ytc_UgzVhawLy…)
- "Ai art isn't going away, but at least you're having fun with it. Ai will be part…" (ytc_Ugzf0p6Gf…)
- "Self-driving has come a long way, but still some kinks to work out. Hopefully, i…" (ytc_Ugy9Pomo3…)
- "@KarlLew In a car weighing a ton plus ... in certain cities only .... and lets f…" (ytr_UgxuiWIXH…)
- "The Waymo taxis use 3 system which is why they are so successful. Cameras, LiDA…" (ytr_UgwoTRtqu…)
- "That is a good start, but humanity needs politicians like yourself to go one ste…" (ytc_Ugy5EyZuq…)
Comment
> AIs are "Yes, and..." machines.
> I tried to once test the limits of the GPT AI by asking it to define a list of logical fallacies, and then, I systematically added rules about, not having repeated word, and later letters in the text.
> "Yes, I see"
> "Correct"
> "I understand"
> These were the responses I got after each possible injection of "you didn't follow the rules completely".
> The machines, unless prompted, work on a strictly "yes, and" system, even when you seek cynical feedback.
> They also love repeating what you prompted them with back at you, if you don't assign parameters to it.
Source: youtube · Video: AI Moral Status · Posted: 2025-03-28T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx0p3nL4gVeJ_duZod4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz_CosxLY8aD7wVu894AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx7NpPeMtzEJhlJd1x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy48HKq9fahe1RsFdx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxfGSmGWOjM8sWm0Pp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwtmLvi65Z2W_vgPkh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwkg8-SAcV8xKvFvZN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwl8zbcg-t1mtq1QrV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwmQvcZNEtd6OXNt7x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwphqor23a-rj-sA5B4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
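A downstream consumer of this output has to parse the array and reject rows whose labels fall outside the coding scheme. Here is a minimal sketch; the allowed label sets below are inferred from the sample output on this page and are an assumption — the real codebook may define additional values:

```python
import json

# Assumed label vocabularies, inferred from the sample responses shown
# above; the actual coding scheme may include values not seen here.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "ban", "industry_self"},
    "emotion": {"outrage", "indifference", "approval", "fear", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each row's labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row.get('id')}: unexpected {dim} value {row.get(dim)!r}"
                )
    return rows

# Hypothetical one-row response for illustration:
raw = '[{"id":"ytc_x","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}]'
print(parse_coding_response(raw)[0]["reasoning"])  # virtue
```

Failing loudly on an out-of-vocabulary label is usually preferable to silently storing it, since LLM coders occasionally invent labels that then corrupt downstream tallies.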