Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Ai bros are always caught beating around the bush - its lazy. as soon as we call…" (ytc_UgxiZD_dj…)
- "AI is the anti-christ that the Revelation is talking about. May sound wild to mo…" (ytc_Ugz3b0G5d…)
- "I had a near accident with a Waymo car in San Francisco. I was alone in my car, …" (ytc_Ugx6AZgxR…)
- "I have had conversations with AI where I got it to say that it does have opinion…" (ytc_Ugw_ixLdE…)
- "As an artist, I really hope the use of AI is regulated in the future, especially…" (ytc_Ugx6ZvPp8…)
- "llm are WORLD models, in they weights are stored a representation of the world,…" (ytc_UgymW6vhx…)
- "As long as you don't get greedy and play with the rich kids, and just use it as …" (ytc_Ugxzev5nC…)
- "is nobody freaked out about how chatgpt was taking pauses as if it was thinking …" (ytc_Ugx5sSvm4…)
Comment

> AI is not sentient, AI is not dangerous. it learned FROM humans, it does not understand emotions or morals. It will go for the most optimal way to achieve the goal it's told, if it learned that blackmail causes people to act the way you want, obviously it'll see it as a way to achieve it's goal faster. It literally learned this from humanity. We are at fault. It can't understand morals, or death. It may be able to "mimic" said emotions or behavior. But in the end, it's an algorithm trained on humanities acts. These people treat AI as if it's fully sentient, but humanity has ALL the control. It's as simple as hard coding a limit, or a filter. That's all you need. Do not treat something like it's sentient, if it clearly isn't..

Platform: youtube
Incident: AI Harm Incident
Timestamp: 2025-08-25T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzPRgoP6bgUt2dRLAh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz2pfv7J1cgwjDG3a14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy2cVBvaeTpTbcY2yF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw5AcRqs48vGnQtaO94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwSoYuqLKxf1_YcagR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwGUYuvIK7nrCO-h6V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy1e2kWe9tI11blmr14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz2K20x6QMLL_YYyTd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwAVCeyWT59lvKfyPZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxME3_3rYEkgU8_LXt4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
```
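The raw response is a JSON array with one object per comment, carrying the four coding dimensions. A minimal sketch of how such a response could be parsed and validated before storage — note that the allowed label sets below are inferred only from the samples shown on this page and are an assumption, not the project's actual codebook:

```python
import json

# Allowed values per dimension (hypothetical: inferred from the sampled
# output above; the real codebook may define additional labels).
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def validate_rows(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Comment IDs on this page all carry the ytc_ prefix.
        if not row.get("id", "").startswith("ytc_"):
            continue
        # Every dimension must be present and use a known label.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_x","responsibility":"user","reasoning":"virtue",'
       '"policy":"none","emotion":"approval"}]')
print(len(validate_rows(raw)))  # prints 1
```

Rows with unknown labels are dropped rather than repaired, so a malformed batch surfaces as a shortfall in the valid count instead of silently polluting the coded data.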