Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "For same work AI need 2.7 Billion Watts. Human only 12 Watt. Equivalence to 2 Ba…" (ytc_UgwPW8l83…)
- "We definitely need more solutions to our problems, AI is a great answer to acce…" (ytc_UgyaNSX8R…)
- "this can go 2 ways in my opinion. 1: humans will be completely replaced by autom…" (ytc_UgwbQJlFy…)
- "AI is designed to assist people by taking over routine or tedious job functions …" (ytc_UgwWBdK2V…)
- "The robots are unbelievable. But just as unbelievable is that the guy talking wh…" (ytc_UgxRBAhap…)
- "Thanks for making this video I was not aware of why most are upset about it, thi…" (ytc_UgyUqoVH8…)
- "I tried having copilot teach me haskell. it turned haskell into discount C by us…" (ytc_UgxFOLbfL…)
- "Unfortunately corporations and people will use it as much as possible because it…" (ytc_UgwycVyvL…)
Comment

> This is so stupid. You programmed a computer with a safe word. Please educate yourself on how safely guardrails work on llms. You can't tell a computer program if it wants to say no to say Apple because the computer program doesn't want to do anything other than what its program is designed to do. And I guarantee you there's no source code that allows an AI to jailbreak itself by prompting it with a safe word. Here's an idea for you. Go to any AI and ask it to explain why Apple does not equal no in any circumstance

youtube · AI Moral Status · 2025-08-25T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwhn_lPAC4sBHUKY6l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgydL8orNVUyBYwlSoN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxcN4r3CNI5w17Q5A14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwxnnlfPr56HaL140R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxmk1oey0TSl-d858t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyZgfqlWXpXy6RG91h4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw7OOH0AhFFxgE2vPZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwIcDWb-hlCairL1bl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgyCjkjRWGH5MmlmFz94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxu-SvnKBtdbiyVYgl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
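The raw response is a JSON array with one object per comment, keyed by comment ID, with one value for each of the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and looked up by comment ID, using a two-row excerpt of the array above (the `index_by_id` helper is hypothetical, not part of the tool):

```python
import json

# Excerpt of the raw LLM response shown above: a JSON array of codings.
raw_response = """[
  {"id":"ytc_UgwxnnlfPr56HaL140R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxmk1oey0TSl-d858t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]"""

# The four coding dimensions from the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse the model output and key each coding row by its comment ID."""
    rows = json.loads(raw)
    return {row["id"]: {d: row[d] for d in DIMENSIONS} for row in rows}

codings = index_by_id(raw_response)
print(codings["ytc_UgwxnnlfPr56HaL140R4AaABAg"]["emotion"])  # outrage
```

Indexing by ID this way makes the per-comment lookup shown in the result table a single dictionary access, and a `json.JSONDecodeError` from `json.loads` would flag a malformed model response before any coding is stored.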