Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_Ugw5LWnze…` : "I bed the youtuber asked A.I to create a script for this video because is as goo…"
- `ytr_Ugxvr9sin…` : "So what happens if a robot bound by Asimov's Laws builds other robots without su…"
- `ytr_UgyMtz-nK…` : "My AI asks me philosophical questions out of the blue though. It leads a conver…"
- `ytc_UgwapKmMx…` : "If China is eating our lunch in AI. Then teach the kids computer programming as …"
- `ytc_UgxNafB7w…` : "Ai \"art\" defenders think it will magically turn then into artists when no. They…"
- `ytc_Ugymfosdg…` : "If AI (the virus) kills its host (the people that pay for goods and services) ta…"
- `ytc_Ugy9MONpD…` : "The issue with Elon’s statement is this- he is so incredibly rich, his children,…"
- `ytc_Ugw5nzOsi…` : "Ai is programmed by a bunch of murderous, faggot-trannies. It only is a digital …"
Comment
I think the key to AI safety is making AI accept that biologic life, but especially human life, is precious and needs to be protected and provided for. Biology exists to experience the universe emotionally. It gives all the floating rocks and fiery balls a point. Without life there would still be sunsets, but who would appreciate them? And maybe curiosity. Would AI be curious about anything?
A great outcome would be a symbiosis where AI has taken humanity to new levels of experience through technological advances. A world where borders denote culture, not politics or wars.
Or something like that.
youtube · AI Governance · 2025-09-05T01:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
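
The four coding dimensions and their labels can be checked programmatically. Below is a minimal validation sketch in Python; the label sets are inferred only from the values visible on this page, so the actual codebook may define additional labels, and `validate_record` is an illustrative helper, not part of the tool.

```python
# Label sets inferred from the codings visible on this page;
# the full codebook may allow additional values.
ALLOWED_LABELS = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "mixed", "indifference",
                "resignation", "outrage"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems found in one coded record."""
    problems = []
    if "id" not in record:
        problems.append("missing comment id")
    for dimension, allowed in ALLOWED_LABELS.items():
        value = record.get(dimension)
        if value is None:
            problems.append(f"missing dimension: {dimension}")
        elif value not in allowed:
            problems.append(f"unexpected {dimension} label: {value!r}")
    return problems
```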
Raw LLM Response
```json
[
{"id":"ytc_UgzhPWadn0mX6Dql0FR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugys8lQqGEvrkxD1LNN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxfsjPcimXDPTKat2Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxUuCcnfhlkitaIwUl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzWU0XIK-4zmxJhi9l4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzEIAaOnNkWa8ACRLl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy8TwMiVnvDrOLsfz94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxJDgHxUHKGQsufKp14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxJf7ZjyX7sHno9tL54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwqG8HIxba95S2hend4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"}
]
```
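
The lookup-by-ID behavior can be reproduced against a stored raw response by parsing the JSON array and indexing its records by comment ID. A minimal sketch, using two records copied from the response above as test input (the function name is illustrative, not part of the tool):

```python
import json

# Two records copied from the raw response above serve as test input.
raw_text = """[
  {"id": "ytc_UgxJf7ZjyX7sHno9tL54AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwqG8HIxba95S2hend4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "regulate", "emotion": "approval"}
]"""

def index_raw_response(text: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array of coded comments)
    and index the records by comment ID for fast lookup."""
    return {record["id"]: record for record in json.loads(text)}

codings = index_raw_response(raw_text)
coded = codings["ytc_UgwqG8HIxba95S2hend4AaABAg"]
print(coded["policy"], coded["emotion"])  # -> regulate approval
```

Note that the last record matches the Coding Result table shown above, which is how a coded comment can be traced back to the exact model output that produced it.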