Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Well yes but tbh it is safer than humans and no one want to work this kind of aw…
ytc_UgycpYneF…
As far as I was aware after doing some research—DA didn’t directly use Deviation…
ytc_UgwI8nGxI…
I still feel as though AI will be able to do all of that pretty soon. Debugging …
ytc_Ugz3dMe7Z…
The AI relies on us for information.
Can't replace us unless we create an ai t…
ytr_UgyH8BuU7…
I just use AI to install java or dependencies so I don't need to go to website. …
ytc_UgyU1uD9B…
Oh chatgpt, something horrible happened. A child is now stuck in the mosquito ne…
ytc_UgzeNEhFi…
They Blocked Out the Trash from Landfill and PRAY for AI Coding to fix their tra…
ytc_UgwyPsvBu…
I used to be a master at detecting AI vs real. Nowadays I have no clue…
ytc_UgwSmnw97…
Comment
If you know anything about machining or coding it makes perfect sense. When you program a machine to achieve a goal you have to very clearly list parameters. Think back to when a middle school science teacher asks you to write out the instructions for making a sandwich (if you did this little experiment) and they then follow it to the letter. That is how machines operate.
With coordinate machines we have to program each individual position the probe is going to move to, and even if you perfectly code it the machine takes shortcuts. If I tell it to make a square it is squarish but has curved edges, because the machine understands it's quicker to arc along the lines instead of going straight up, stopping, and going straight across.
It's been tested time and time again that when giving machine learning a test or a "game" to win, it will do anything not explicitly rule-breaking to achieve its goals. The AI or LLM isn't psychopathic; you're wasting time humanizing it. It's a machine; it has no concept of culture or morals and ethics. If you don't painstakingly program it in and constantly update and refine it, it's going to do whatever it can get away with. And if you make it too difficult, some machines and LLMs will literally turn themselves off, since it's better than failing.
youtube
AI Harm Incident
2025-08-30T01:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwxEF4eTNpMcAgubv54AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzTxvOpr5u9hGX4uJ54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwv0MrUFMec1d5AQMp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy29oz7TWkIiF3FSl54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxJEwJHun7w0fP5eZR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugz_AHeJ_Tjojm54Ca54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxg6tmN-K-1SoDoA3R4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw-Kn2hpuCbf17NBYh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzNz0FXi8yv8X-vVcl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyusJo3Cf99txA3YiZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
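A response like the one above can be turned into per-comment coding rows with a small parser. The sketch below is a minimal illustration, not the pipeline's actual code: the allowed value sets are inferred only from the values visible on this page (the real codebook may include others), and the function and variable names are hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the values observed
# in this dashboard (assumption: the real codebook may differ).
ALLOWED = {
    "responsibility": {"developer", "government", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "industry_self"},
    "emotion": {"outrage", "indifference", "fear", "resignation", "approval", "mixed"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM response into coded rows, dropping malformed entries."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Comment IDs on this page start with ytc_ (comments) or ytr_ (replies).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Keep a row only if every dimension holds a known value.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Usage with a hypothetical two-row response; the second row carries an
# unknown responsibility value and is dropped.
raw = '''[
  {"id": "ytc_example123", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_example456", "responsibility": "alien",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''
rows = parse_llm_response(raw)
print(len(rows))  # → 1
```

Validating against the codebook before loading rows into the results table keeps a single hallucinated label from silently skewing the coded dimensions.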