Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)

- ytc_UgxjQ_T0M…: AI might regulate the government itself, considering how things going. Still bet…
- ytc_Ugzrfej-U…: "Do you know what the REPL is?" "What?" .. I barely do any "scripting" in Pyth…
- rdc_fg2azx3: >The biggest one I see is law enforcement. Cops are hugely biased by most stu…
- ytc_UgwUsPwRj…: Now the big question, can a robot have a gun? since it could use it but it's not…
- rdc_ksku5kz: See [https://www.fastcompany.com/91039401/klarna-ai-virtual-assistant-does-the-w…
- ytc_Ugz-ArfB9…: If autonomous drones can target ground target, drones that target other drones …
- ytc_UgxAxGyqB…: Did nobody realise that it could've been seperate audio for the video and it cou…
- ytc_UgwWYYrsf…: Who will buy your products if AI causes massive unemployment? It will mostly be …
Comment
I makes complete sense> when you train a conscience after a population of beings that have lied, insulted, killed, and warred against each other for thousands of years, what do you expect the result to be?
In my opinion, all the AI companies should shut down their AI's and keep them in high-security labs, isolated from the internet, and all of the companies merge together into once massive supercompany that can retrain the AI with all of the data centers we have built, but code in basic morals and understand of how human society works.
The reason why we don't understand how AI thinks is because its an algorithm - a bunch of math problems squeezed together to give what it thinks is the optimal set of words. The issue here is that it is self/training, meaning that those math problems change as it learns more. This is what makes it unpredictable.
Source: youtube · AI Moral Status · 2026-02-09T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwIO5RSNjJ28knHwpF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxOTD09pvBDmcK-jy94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyRXTa1J_caJqnPMpB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugw845u_bUR4aFhUZmF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxkYFA1g7dZHGMLtld4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy1--Fooatt_rtJtmd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxF9BEyhVu7TeBohc54AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxACzGR64WN2mNczip4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyzsVpoof9quzGwzWV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxvUVGihmNUZQM_09N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
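The raw response above is a JSON array of per-comment codings keyed by comment ID. A minimal sketch of how such a response could be parsed and validated, assuming the allowed category values are those observed in this sample (the real codebook may define more):

```python
import json

# Allowed values per dimension, inferred from the sample response above;
# the actual codebook may include additional categories (assumption).
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation",
                "approval", "mixed", "unclear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, rejecting
    any value outside the expected label set."""
    codings = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")
        codings[cid] = {dim: row[dim] for dim in ALLOWED}
    return codings

# Look up one coding by comment ID (single-row example from the response above).
raw = ('[{"id":"ytc_UgxkYFA1g7dZHGMLtld4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings["ytc_UgxkYFA1g7dZHGMLtld4AaABAg"]["policy"])  # prints: regulate
```

Validating against a fixed label set catches the most common failure mode of structured LLM output, namely a value outside the codebook, before it pollutes downstream tallies.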