Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I saw something about being an “AI artist” and it made me both sick and Angry…" (`ytc_Ugx7SLEOS…`)
- "Ai is stupid and smart I was wearing a band shirt and It called it a dog…" (`ytc_Ugxcpciw_…`)
- "Yall act like this is a bad thing but this is the one good thing I’m seeing out …" (`ytc_UgwEGoq6u…`)
- "Anybody remember when people thought how far-fetched The Terminator was? Now the…" (`ytc_Ugy7Xbii5…`)
- "AI is like what Spock says in Star Trek TMP, which is a child and should be trea…" (`ytc_Ugz70FiMW…`)
- "I made chatgpt so mad by saying 1+1 is 3 it got so mad I got banned by it I coul…" (`ytc_UgzfqDUgW…`)
- "What if a day comes when we create AI better than us and who can create Better A…" (`ytc_UgzPhNMxQ…`)
- "Putting AI and expert together... it doesn't exist. That's like saying a human's…" (`ytc_UgxXs-5V1…`)
Comment
To be fair to the interviewer, Penrose doesn't explain Godels incompleteness in the most intuitive sense. He should just say "any system guided by rules will have things that cannot be proven by sed rules, even if AI" and leave it at that. The self reference through this guy for a loop.
youtube · AI Moral Status · 2025-12-06T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwFddKvcNqVqDiZ_aR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugwp2AlJg0-v0dDAcC14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxNdpIDHZiVEC-LqTB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxCoQdQBuKfz3ZjH094AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyAhmQ1yTF20ZYfuaR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgypS1GFokEYnSKLtBl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwYlVVip5e9lo3ONrx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugztx6AOG5HvmjWYYzl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyTG4UtRvgNx4_1lQh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgythnpBsnZEmUTtqDd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
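A raw response like the one above can be parsed into per-comment codes and checked for completeness before it is stored. The sketch below is a minimal, hypothetical illustration (the tool's actual ingestion code is not shown here): it assumes only the field names visible in the JSON above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the helper name `index_codes` and the sample comment ID are invented for the example.

```python
import json

# A tiny stand-in for the model's raw output: a JSON array of coded comments.
# The ID below is a made-up placeholder, not a real comment ID.
raw = """[
  {"id": "ytc_example1", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_example2", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]"""

# The four coding dimensions seen in the response, plus the comment ID.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codes(raw_json: str) -> dict:
    """Parse the model's JSON array and index rows by comment ID,
    rejecting any row that is missing a coding dimension."""
    rows = json.loads(raw_json)
    by_id = {}
    for row in rows:
        missing = REQUIRED_KEYS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} missing keys: {missing}")
        by_id[row["id"]] = {k: v for k, v in row.items() if k != "id"}
    return by_id

codes = index_codes(raw)
print(codes["ytc_example2"]["emotion"])  # outrage
```

Indexing by `id` is what makes the "look up by comment ID" view above cheap: each coded comment's dimensions are retrieved in one dictionary access rather than by rescanning the raw response.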