Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
> drag out executives of the major corporations contributing to climate chang…
rdc_eh669fz
You bring up a fascinating point about consciousness! While AI like Sophia can p…
ytr_Ugz1IJX_U…
People seeing actual art: nice shading! Cool facial expressions! The setting is …
ytc_Ugz1vo14u…
Anyone notice how the fiction(movies, books, music etc.) of our past is becoming…
ytc_UggPxLAiN…
They shouldn't be allowed to force this creepy shit on humanity period he's talk…
ytc_UgxvXJWK1…
Ngl I only use ai for shits and giggles like making stupid images on chat gpt or…
ytc_UgzAjp-Fu…
There are still plenty of jobs for picker and packers, despite it could be done …
ytc_UgwRscri7…
This is an incredible suggestion! 🥰 actually, I play with an AI chat app called …
ytc_UgzWRfwG2…
Comment
The problem I see is that even if we are able to align an AI with human needs and requirements that is not enough because we would have to ensure that every AI was fully aligned. And that may turn out to be impossible for geopolitical reasons. I can see the industrial military complex of many countries being happy to create AI's that are not aligned with the needs of all humans.
youtube
AI Moral Status
2025-10-10T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
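A coded record like the one above can be checked against the four dimensions before it is stored. The sketch below is a minimal validator; the allowed values are assumptions inferred from the labels visible on this page, not an exhaustive codebook.

```python
# Allowed values per dimension, inferred from labels seen on this page
# (an assumption, not the tool's actual codebook).
ALLOWED = {
    "responsibility": {"government", "company", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "approval"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is valid."""
    problems = []
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}: unexpected value {value!r}")
    return problems

# The record shown in the table above passes cleanly.
example = {"responsibility": "government", "reasoning": "consequentialist",
           "policy": "regulate", "emotion": "fear"}
print(validate_coding(example))  # []
```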
Raw LLM Response
[
{"id":"ytc_UgxGQUmIpLeFlVIDfUt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgypjQqT3e-Zz3saSR14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzAtvqbmQte7oFZz7Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyveMc7u-7ne9DBptZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_Kf8CwqOhrrLwZnh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxfruevPX7EW-ohjCt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxVUIvqOJ-FpeC_iV54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwr6eTh2pBwRoE1BxJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyZhBD0hBQfAb5GL4p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgynphV8hjpJa61EQld4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
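The raw response above is a JSON array of per-comment records, so lookup by comment ID reduces to parsing it and building a dictionary keyed on `id`. A minimal sketch, using two records copied from the response above (the `by_id` index is a hypothetical helper, not part of the tool):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''[
 {"id":"ytc_Ugw_Kf8CwqOhrrLwZnh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxGQUmIpLeFlVIDfUt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]'''

records = json.loads(raw)
by_id = {r["id"]: r for r in records}  # index records by comment ID

print(by_id["ytc_Ugw_Kf8CwqOhrrLwZnh4AaABAg"]["emotion"])  # fear
```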