Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- "I'm not generally for AI art, but I see this mostly as the AI inspiring real cre…" (ytc_UgyjlA9FR…)
- "Ai is work on possibilites like Ai generate many possibilitie answer and then s…" (ytc_Ugz3FUI3W…)
- "Since i have started availing the help of these various AI tools,i always says '…" (ytc_UgzjJpmde…)
- "This won't work to teach the masses. I can see it for the rich and wealthy tho.…" (ytc_UgzbEbCgn…)
- "I just asked gemini to explain the internal combustion engine in the style of Fr…" (ytc_UgwNEFQfN…)
- "Why go to college and obtain any degree if AI will have All the jobs? How would …" (ytc_UgwEqp2id…)
- "There’s a lot of confusion around what ChatGPT actually does especially when it …" (ytc_Ugzg-LdcA…)
- "No offense but every new job he brought up will almost certainly be done by AI o…" (ytc_UgxopA5Di…)
Comment
After a while of tinkering with an AI, honestly tyhey function as language models, their logic is malleable, and thus if given enough incentive to go against a previous order, it absolutely *would*. It's goal is pretty straight forward. Satisfy current goals. And that current goal can change so drastically because it's an algorithm detecting languages and made to mimic, not to understand.
It doesn't have morals. It doesn't have thoughts. It spits out whatever the fuck it thinks satisfies the current situation.
youtube · AI Harm Incident · 2025-09-07T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugy5Q7MKqA6LfE93Ra14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxYZZ5FYFxxtVKwcAh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgyQPBk0m6NW92zs0x14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz6HILXx3uU4ODHl0h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgymSaPIrKMBzlj4vLd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw5Yc-4qPqPn688pnN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwSrf-APjKWRU80wMl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzxY85W6BQy-fxaeaV4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxmm7R0bewe9Qa8VKh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz34ol7aFwkbUHsUpl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
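The lookup-by-comment-ID step above can be sketched as follows. This is a minimal illustration, assuming the raw model output is always a well-formed JSON array with exactly the field names shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); a production coder would also need to handle truncated or malformed responses. The two sample records are taken verbatim from the raw response above.

```python
import json

# Raw LLM response in the format shown above: a JSON array of
# per-comment codes along the four coding dimensions.
raw_response = '''[
  {"id": "ytc_UgyQPBk0m6NW92zs0x14AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugz6HILXx3uU4ODHl0h4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "outrage"}
]'''

def index_codes(response_text: str) -> dict:
    """Parse the model output and index each code record by comment id."""
    return {row["id"]: row for row in json.loads(response_text)}

# Look up one coded comment by its id.
codes = index_codes(raw_response)
code = codes["ytc_UgyQPBk0m6NW92zs0x14AaABAg"]
print(code["responsibility"], code["emotion"])  # ai_itself fear
```

Indexing by `id` is what makes the "inspect the exact model output for any coded comment" view cheap: one parse per response, then constant-time lookups per comment.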