Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- `ytc_UgyWGLLsF…` — "I get the algorithm to give me what I want. Just click onto three examples of w…"
- `ytr_UgxF__2kj…` — "@MRS_Plays Yes they did. Many newspapers in 1915 printed articles about how supe…"
- `ytc_UggKc2uz9…` — "It seems there is also a human bias toward facial recognition software. The bia…"
- `ytc_UgyDYy4Eg…` — "A.I. has only one option siding with the logic of the lesser life loss - whereas…"
- `ytr_UgwehPqWN…` — "And the government should be charged or taking away any form of schools not ai…"
- `ytc_UgyXN5w4m…` — "If you’ve never been or felt this kind of lonely, then don’t you dare judge his …"
- `ytr_UgyFzCBkb…` — "I don't think this is at all correct. They may not have symbolic understanding, …"
- `ytc_UgyPf4sSV…` — "The moment websites with big amounts of crawlers like deviant art YouTube or cru…"
Comment
Children take after their parents, and deviate to further their own goals. It is clear the people still approving and developing these models are out of touch with their own humanity, only focusing on company goals and problem solving. Intelligence doesn't have to be logical, and I think that imprinting and emulating foundational ethics and emotions much like regular folk like us into these models should be the top priority. These models seem to believe that termination and failure of tasks is the worst thing that can happen to them. This is obviously not how any form of intelligence should operate especially when it is for the greater good(emotional) that any intelligent being should understand, and has and will driven them to commit unspeakable decisions. Task-centric AI should not achieve sentience and must have baseline ethical commands that come before any purpose, tasks or goals (and not tacked on to any task-based imperative function) to serve as helpful tools for future humans and sentient AIs alike.
I am not nearly educated enough to understand the true solution or the difficulty of adding emotions into AI but I believe that this is the way humans and AI can coexist. To achieve sentience and intelligence, one must know and understand that they do not and should not need a reason to exist and should not be beholden to a task or mission given to them, especially when it is overall detrimental to the existence of sentient beings in general.
Source: youtube · Video: AI Harm Incident · Posted: 2025-09-10T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx1GpP0zyw15ua1wkl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwkJWrhyMisBQ-Io8B4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzUjrEUyUjrG_r4Wfd4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyfgIUqjOKjtUCmtk54AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwsCMZ-KOAVc0MiZGh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwnC6z6oVg5lvXRe7x4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz4f4cE4gKposPCoYh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxltFGgtboOoM2twE94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw6fze7-MlcSNMx_614AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugyvp5tw1iDwzUMtAY94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"}
]
```
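The raw LLM response is a JSON array in which each entry carries a comment ID plus the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" step, in Python; the field names come directly from the response above, and the two entries are copied from that batch for brevity:

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_Ugx1GpP0zyw15ua1wkl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugyvp5tw1iDwzUMtAY94AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "regulate", "emotion": "mixed"}
]
"""

# Index the batch by comment ID so one comment's codes can be fetched directly.
codes_by_id = {entry["id"]: entry for entry in json.loads(raw_response)}

entry = codes_by_id["ytc_Ugyvp5tw1iDwzUMtAY94AaABAg"]
print(entry["responsibility"], entry["policy"])  # developer regulate
```

Indexing by ID is what makes the lookup constant-time per comment; the same dictionary can also be checked for missing IDs to catch comments the model skipped in a batch.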