Raw LLM Responses
Inspect the exact model output behind any coded comment. A comment can be looked up by its ID, or selected from the random sample below.
- "If people don't have money these businesses themself will get bankrupt ain't no …" — ytc_UgwT0xeU9…
- "Well if you aren't blind you can tell that think is a robot ot you can be smart …" — ytc_UgxImyPd6…
- "What you have to realize is AI is only as good as the programmer! So if the prog…" — ytc_UgzI3hA90…
- "Suno is still getting trained. In another six months, it's going to get a lot be…" — ytc_UgzCZnGYd…
- "What if the government put a chip in your brain when you were a baby and that ch…" — ytc_UgyfU6kAI…
- "Rules 1: Only answer with one word. 2: Be simple and direct 3: Hold nothing back …" — ytc_UgwUT54zG…
- "And still why countries are investing so much in AI?? Do they want to put the wo…" — ytc_Ugzlyxwxc…
- "Through digital platforms, AI is enabling improved teacher-student collaboration…" — ytc_Ugwum33oI…
Comment

> Cap. The default for all AI is to NOT harm humans regardleas of circumstances. Any conflicting goal/mission/task that would put humans in harm's way must be aborted immediately, even if the potential abortion would cause harm, abort nonetheless. The AI own self-preservation should never take precedent over a human life. The AI must self destruct if at any point a human is harmed.

Source: youtube · AI Governance · 2025-09-08T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
{"id":"ytc_UgybPWsjUnB_f8Hdi4p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwXpW4koMmosz2AQrl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwo3y6KXxQPnD71-Et4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx5_8M4yl4DY1Y2bDh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx4ttypNDKLM5H6sJx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"}
]
```
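The raw response is a JSON array that batches codings for several comments, so producing the per-comment "Coding Result" view above amounts to indexing that array by comment ID. A minimal sketch of that lookup is below; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response shown, while the function name and the single-row sample input are illustrative.

```python
import json

# Illustrative one-row batch response; real responses contain one object
# per coded comment, as in the raw output above.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugx4ttypNDKLM5H6sJx4AaABAg",
   "responsibility": "developer",
   "reasoning": "deontological",
   "policy": "liability",
   "emotion": "approval"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and map each comment ID to its coded dimensions."""
    rows = json.loads(raw)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"} for row in rows}

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_Ugx4ttypNDKLM5H6sJx4AaABAg"]["policy"])  # liability
```

Indexing by ID also makes it easy to spot comments the model skipped or coded twice, which a plain list of rows would hide.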