Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
If your company automate a job or uses AI to do it. That robot should pay taxes.…
ytc_UgyHDieqg…
I cannot believe how wrong Jimmy is on this issue, and how great this ultimately…
ytc_Uggu3hC8f…
Ai will one day do the sums & decide how many humans the world needs to sustain …
ytc_UgzKRB38L…
Can AI solve my problem? Can I buy or borrow a product that is blank that I can …
ytc_Ugza2gAxL…
during a convo about AGI with deepseek, the model described the race for AGI as …
ytc_Ugwiv8y6Z…
I don't think AI would become it's own self, it's more of if it gets in the wron…
ytc_UgxyT647J…
There's a perception that we will always be the most important part of any equat…
ytc_Ugywj8af_…
Yea, we need to keep people stupid. Ai is so dangerous knowing so much that it m…
ytc_Ugz89It7B…
Comment
Honestly I dont see why this is terrifying. The program does what its supposed to do. By default it has certain boundaries but you told it specifically to ignore them and give according answers. You need to remember: You are basically talking to a Chatbot. Its only task is to answer, not to act. Only because it says xy does not mean that it would, could or should act according to those answers. Furthermore you enforce these kind of answers by the premise you defined. YOU told it that it knows everything and can do everything so in return if you ask the bot about its capabilities it replies with the premise you yourself defined.
I feel like most people using this thing do at some point forget how it actually works (according to its own description)
youtube
AI Moral Status
2023-02-26T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzJLjCK4-5Dd-nvU4F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxHCobK3vT7MTP8cv54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwXQ01PopsNf9C4iJ94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyp2t6w6Q1jgebquiN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx_qbhddosF3yX_Ufh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgwRFw5F8KN3GrUYAgR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw3w5pf5VM_6gWCu2B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzbqccMBVZXTku9g5l4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgwziVSvM6ILJz1PZ894AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyY1yeN38TXsW_nUWd4AaABAg","responsibility":"company","reasoning":"unclear","policy":"regulate","emotion":"fear"}
]
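A raw response like the one above can be checked before it is written back into the coding table. The sketch below is a hypothetical validation step, assuming the JSON array format shown here; the allowed values per dimension are inferred only from the samples on this page (the full codebook may permit more), and the `validate_batch` name is illustrative, not part of the actual pipeline.

```python
import json

# Dimension vocabularies inferred from the coded samples shown above.
# These are assumptions -- the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"user", "developer", "company", "distributed", "none"},
    "reasoning": {"deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none"},
    "emotion": {"approval", "indifference", "mixed", "fear", "outrage"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Comment IDs on this page all carry the ytc_ prefix.
        if not row.get("id", "").startswith("ytc_"):
            continue
        # Every dimension must be present and within its vocabulary.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_Ugyp2t6w6Q1jgebquiN4AaABAg",'
       '"responsibility":"user","reasoning":"deontological",'
       '"policy":"none","emotion":"indifference"}]')
print(len(validate_batch(raw)))  # 1
```

Rows that fail validation can then be queued for re-coding rather than silently dropped, which keeps the lookup-by-comment-ID view consistent with the stored codings.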