Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Actually the question of whether to make AI safety is impossible at the first place. To make it safe is conflicting to entropy concept that universe is always expanding, moves toward a state of greater disorder, randomness, or uncertainty. We can't even control a one factor of any human who might direct AI to non-safety weapon. it's the nature that any human could do or even coding the good or ethnicity toward AI might end up AI as a hazardous tools for humanity. Let's say Oh I love human, but love can lead to massacre because I'm so loved with human that want human to be in better place. So, I decide to xx human.
youtube · AI Governance · 2025-09-05T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy8rw-12BuS2kz6Bet4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz95ENgTWpo5NSp1KR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzV08ng7URSfrYf_Ed4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgyQK8h-C2053Ceecfp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyfefvNmOUoKOERNFF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzeLR-_7AviZ1bvEVV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyaJm_osDBPIItVT9l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwFDoJu1jevN-vpVhZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzLiL2b2uqWnbQWfGh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzNgQfI6kf3bPQaSth4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
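Because the raw response is plain JSON, indexing the coded rows by comment ID takes only a few lines. A minimal sketch, assuming the response parses cleanly; the `lookup` helper name and the two-entry sample are illustrative, not part of the tool:

```python
import json

# Two rows from a raw LLM response like the one shown above (illustrative sample).
raw = """[
  {"id":"ytc_Ugy8rw-12BuS2kz6Bet4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyfefvNmOUoKOERNFF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]"""

# Build an index from comment ID to its coded dimensions.
codes = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id):
    """Return the coded dimensions for a comment ID, or None if it was not coded."""
    return codes.get(comment_id)

print(lookup("ytc_UgyfefvNmOUoKOERNFF4AaABAg")["emotion"])  # outrage
```

In practice the model output may also need validation (e.g. checking that each dimension takes one of the allowed values) before it is trusted as a coding result.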