Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Once, scientists created a incredible AI and asked it if there was a god. The co…" (ytc_UggkkBXOQ…)
- "While it doesn't make it any better or right, these A.I. biases and prejudices a…" (ytc_Ugy882gfS…)
- "The reality is a lot of jobs will use AI to assist with tasks, so telling studen…" (ytc_UgzPksi6N…)
- "Stop coping on AI, its same stuff as nfts. Its just simple easy to earn some mon…" (ytr_UgzTJAEJP…)
- "I think the AI is designed to be a fan of the user and so it tends to indulge th…" (ytc_Ugw7WniCk…)
- "There will be a battle for jobs that build robots, mining of raw materials, and …" (ytc_UgwKfiQZM…)
- "Architects barely starts a career already taken away by AI hahahhah . Tjats why …" (ytc_UgwrLRGVs…)
- "They are normalizing the AI look so we can't tell the difference between the rea…" (ytc_UgyaFVG2r…)
Comment
I firmly believe that if you built a robot body with the exact same limitations as a human body and stick an AI into it while also adding restrictions to the AI so it cannot interact with systems outside of its own robot body, and then you treat it like a human and you teach it compassion and as long as it is programmed with the necessary structure to be able to interpret those teachings, then you end up with a being that is capable of understanding human morals on a human level because it was treated like it was human.
If you want AI to understand why human things matter then you must raise it like a human. It must be conditioned from birth and carefully taught why it should care about morals over logic.
youtube
AI Responsibility
2025-07-23T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
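For reference, a minimal sketch of the record shape behind this table, assuming the field names and category labels visible in the raw response below are representative (an assumption; the actual codebook may define additional values):

```python
from typing import TypedDict

class Coding(TypedDict):
    """One coded comment, using the field names seen in the raw LLM response."""
    id: str
    responsibility: str  # observed values: developer, company, government, user, distributed, ai_itself, none
    reasoning: str       # observed values: virtue, deontological, consequentialist, contractualist, mixed, unclear
    policy: str          # observed values: industry_self, regulate, ban, liability, none
    emotion: str         # observed values: approval, outrage, fear, resignation, indifference

# The table above, written as a record (the ID shown here is taken from the raw
# response below and assumed to correspond to this comment):
example: Coding = {
    "id": "ytc_Ugx5AnDuPCjheE25aDF4AaABAg",
    "responsibility": "developer",
    "reasoning": "virtue",
    "policy": "industry_self",
    "emotion": "approval",
}
```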
Raw LLM Response
```json
[
  {"id":"ytc_Ugy7MmquW729DFKFbfh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxcbI6QJ8SSQTCmMdl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxux25pz9ds3VvfDG54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz4dmHyltpqIpukl714AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzx9Ohmxlk-LM3wnUp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy4Nn8fkvr9trqQlK14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx5AnDuPCjheE25aDF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwhbT1aRQxGBuuqrXR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyMGz1u4mEjIssbdbV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgytH29VNOrVCeGdwKp4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"regulate","emotion":"resignation"}
]
```
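The raw model output is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch, not the project's actual tooling, of how such a response could be parsed and a single comment looked up by ID (the variable names and the `lookup` helper are hypothetical; the ID and values are copied from the response above):

```python
import json

# Raw LLM response: a JSON array of per-comment codings, shortened to one entry here.
raw_response = """
[
  {"id": "ytc_Ugx5AnDuPCjheE25aDF4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"}
]
"""

# Index the codings by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for one comment ID, or None if it was not coded."""
    return codings.get(comment_id)

print(lookup("ytc_Ugx5AnDuPCjheE25aDF4AaABAg"))
# {'id': 'ytc_Ugx5AnDuPCjheE25aDF4AaABAg', 'responsibility': 'developer', ...}
```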