Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
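As a sketch of how an ID lookup like this might work (variable names and record shape are assumptions mirroring the coding output shown further down, not the tool's actual internals), coded records can be indexed by comment ID:

```python
# Hypothetical sketch: index coded records by comment ID so the exact
# model output for any comment can be retrieved in O(1).
# The IDs and field values below are taken from the raw response shown
# later in this section; the surrounding structure is an assumption.
coded_records = [
    {"id": "rdc_n7zlfzi", "responsibility": "none", "emotion": "approval"},
    {"id": "rdc_n7y7p5f", "responsibility": "developer", "emotion": "mixed"},
]

# Build a dict keyed by comment ID.
by_id = {rec["id"]: rec for rec in coded_records}

print(by_id["rdc_n7y7p5f"]["emotion"])  # prints "mixed"
```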
Random samples — click to inspect
Anger is an emotion, but this is actually a robot, so where did emotion in it come from…
ytc_UgxOSya7z…
Ai trespassing artists when I kick them out of my reserved seat for the minecraf…
ytc_Ugw-K9gLo…
🇨🇦 Ok folks. Dev Tech here. AI IS NOT REAL. It’s a program.. NOT A PERSON. God b…
ytc_UgyLB0Fe6…
The beginning of the end of humanity, glad I was born when I was (in the 80’s) s…
ytc_UgxS-YtBu…
What makes it unreal is how realistic it is. producing scene like that would spe…
ytr_Ugyb45hs_…
Growing up, i read a story.. 3 smart and 1 dumb kid were walking thru a forest w…
ytc_Ugz3yJDhj…
Idk, I think I would have to try and study art ai algorithms to fully understand…
ytc_UgxOmfiJu…
Driverless robots need to be shut down the second they start killing people in a…
ytc_Ugz7BrrdJ…
Comment
I appreciate it! I've said it for quite some time now. Thinking that you can _completely_ automate multi-step tasks with a process that **cannot** know if it's right or wrong!
I saw a comment from a software dev a while ago. He was running an agent for data retrieval from a server. It was basic search/query stuff, going quite well.
Then the internal network threw a hissy fit and went down, but the agent happily kept 'fetching' data.
The dev noticed after a few questions and, just for shits and giggles I suppose, asked the agent about it.
And the LLM's first response was to obfuscate and shift the blame! When pressed it apologized. The dev asked it another query and it happily fabricated another answer.
This, in my mind, perfectly demonstrates the limitations. It didn't lie; it didn't know it was wrong.
Because _they don't know **anything**_.
And yet the number of people, just here on Reddit, who are convinced it is conscious, or some other form of intelligence, is quite alarming.
Source: reddit · Topic: AI Responsibility · Posted: 1754838164 (Unix epoch, ≈ 2025-08-10) · ♥ 30
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_n7zlfzi","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_n7y5um3","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"rdc_n7y70lu","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"rdc_n7y3hbp","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"rdc_n7y7p5f","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
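The raw response is a JSON array of coded records, one per comment. A minimal sketch of how such output might be parsed and sanity-checked follows; the allowed-value sets are inferred only from the values visible in this section (the real codebook likely contains more categories), and the function name is an assumption, not the tool's actual code:

```python
import json

# Allowed values per dimension, inferred from the displayed output.
# ASSUMPTION: the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "user", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "liability"},
    "emotion": {"approval", "outrage", "resignation", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and drop any record whose
    values fall outside the codebook, rather than silently keeping it."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

raw = (
    '[{"id":"rdc_n7y7p5f","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"none","emotion":"mixed"}]'
)
print(parse_coding_response(raw))
```

Validating against a fixed value set matters here because an LLM coder can emit labels outside the codebook; dropping (or flagging) such records keeps downstream tallies clean.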