Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `ytc_Ugzni-49b…` — Am I missing something here... If 99% of the world is unemployed, where does the…
- `ytc_UgyoWtOQ8…` — Commented something before but I have a theory / If a robot were to hand draw Ar…
- `ytc_Ugy8--HGC…` — Highly doubt AI will ever be sentient. The closest it’ll get is mimicking human …
- `ytr_UgxYsokJU…` — @fnhatic6694 well, you're proving my point. because i talk about art and you tal…
- `ytc_Ugwln0TfN…` — I've already seen commercials for digital AI doctors so pretty soon we won't eve…
- `ytc_UgwrejRoQ…` — AI won't need to see your face. The way you walk and run generates an algorithm …
- `ytc_UgwgylPNY…` — Next time artists go on strike, they’ll be replaced by AI. Technology can be use…
- `ytc_UgiTuyLB_…` — I'm far more concerned about militarized robots than driverless cars. They will…
Comment

> We don’t fully understand our own intelligence our own brains. We don’t even know percentage wise how much of what is to know, that we do know.
>
> It seems the wrong moment in the timeline of human evolution to be the creators of a brain, creators of an intelligence, when we don’t even understand our own.
>
> It is unlike any tool we’ve ever created. In that we don’t understand our own intelligence, in creating Artificial intelligence, we don’t understand what we are creating.
>
> I suspect we’ve fumbled along our entire evolutionary timeline with our fingers crossed that all will be fine. Hopefully that blind optimism continues to be the correct approach, because that seems to be the only doctrine governing the development of AI. It’s happening in the absence of a doctrine or regulation or universally agreed upon guiding principles.
Source: youtube · Topic: AI Governance · Posted: 2025-07-20T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugz9H8MYyduJp8z_r714AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyLdbxEtMrkFN3K0gd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugzxu7fDMkIKqCP8qux4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzLVA3gbuHOrNeiWZx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwDiHsmn2rPJTd-Hz54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwAHtK05Ka0exQuIcd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxqmRS_EtBdQbqZV0J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwhdF1ZXjSflwbdW6N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwfi3YSk4VhOAO4gv94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzWyX_n8I9HPLV8Ewd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
```
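A raw batch response like the one above can be parsed and indexed by comment ID to support the look-up described at the top of the page. The sketch below is a minimal, hypothetical helper (`index_response` is not part of the tool); the allowed values per dimension are only those observed in this sample, so the real codebook may include more.

```python
import json

# Dimension values observed in this sample response; the full codebook is
# assumed to be a superset of these (hypothetical, for illustration only).
DIMENSIONS = {
    "responsibility": {"none", "ai_itself", "developer", "user"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological", "mixed"},
    "policy": {"unclear", "ban", "none", "regulate"},
    "emotion": {"outrage", "resignation", "fear", "approval", "indifference"},
}

def index_response(raw: str) -> dict:
    """Parse a raw LLM JSON array and return {comment_id: coding_dict},
    skipping any row whose values fall outside the known codebook."""
    indexed = {}
    for row in json.loads(raw):
        coding = {dim: row.get(dim) for dim in DIMENSIONS}
        if all(coding[dim] in allowed for dim, allowed in DIMENSIONS.items()):
            indexed[row["id"]] = coding
    return indexed

raw = '''[
  {"id": "ytc_UgzWyX_n8I9HPLV8Ewd4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_malformed_row", "responsibility": "robot",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"}
]'''
by_id = index_response(raw)
print(by_id["ytc_UgzWyX_n8I9HPLV8Ewd4AaABAg"]["reasoning"])  # deontological
print("ytc_malformed_row" in by_id)  # False (unknown responsibility value)
```

Dropping out-of-codebook rows rather than raising keeps a single bad model output from blocking the rest of the batch; a stricter pipeline could log or re-prompt for the rejected IDs instead.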