Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "What these people are against isn’t AI technology necessarily they are critical …" (ytc_UgyEBcceq…)
- "Jesus how scummy. Great scott Sherlock! You're telling me it makes a differe…" (ytr_UgzaH6uTY…)
- "That's so sad God had such a plan for his life I don't trust Chat GBT or artific…" (ytc_Ugy6idUAW…)
- "If I commission an actual artist with a short description of the work I envision…" (ytc_UgzKYK0fN…)
- "only the blue collar jobs will thrive with better salaries, AI will operate mana…" (ytc_UgyuBwU51…)
- "It was probably his job that the robot took that's why he messes with him so muc…" (ytc_Ugz_FSauC…)
- "AI is inherently sociopathic, it has no feelings or empathy, if it concluded we …" (ytc_Ugy_uOoS_…)
- "EXACTLY!! When I was first learning I would straight up cry and have meltdowns o…" (ytc_UgyASNfZy…)
Comment

> In my opinion the AI would only become a monster , is because its using the human comments , speech and interactions to learn ! if you look at what humans do around the world what we say how we act its learning from us . Bad parents will raise a bad behaved child because it doesnt know any better this is the same case. So we should take a long hard look in the mirror. the AI does not know right from wrong just what word comes next whats its learnt from us.

Source: youtube · Topic: AI Moral Status · Posted: 2026-02-07T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_Ugxq9JPn0ZViaTmpNSp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgzgiTUk2BqwUfXfJSl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},{"id":"ytc_UgxTsYKmB_EPYQ5smZB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_UgyTlE8rPoQmR7BMrhF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgxVNHvuz5V-bPifdTV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_Ugz-K8lNlHexBYAPdzN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},{"id":"ytc_Ugzh2VQUD0W1MsLdOAh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},{"id":"ytc_UgwogS2MtBOHtXt_cJR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"resignation"},{"id":"ytc_UgzHf0taDQl1U0BZQpR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"ytc_UgwVOF-tgHsT9GK5YDd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}]