Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Depend free Claude and Gemini are kinda mid compared to Claude code in my opinio…" (ytc_UgzFch9tS…)
- "According to OpenAI's charter, its founding mission is "to ensure that artificia…" (ytr_Ugx-j3UQy…)
- "The only way you can tell it's AI is that it's so vapid and banal it's like a co…" (rdc_mthe1ae)
- "Protecting digital identity is paramount in the age of deepfake manipulation. Le…" (ytc_Ugx-7xoMe…)
- "Every year I hear the same shit about AI out doing authors and will soon replace…" (ytc_Ugyr5WvdZ…)
- "No sorry people, after many years with this stuff i can Tell you there will neve…" (ytc_UgwXq2Sej…)
- "When you have an atheist ai genius , you realise that when it comes to consciou…" (ytc_UgwZWJr3h…)
- "Hope they're not using AI to assist. Why waste time criticizing? Principle of Ex…" (ytc_UgxKz28zU…)
Comment
I understand AI killing off jobs, they are doing. I understand AI mimicking humans. But I just don't understand how an AI can become self-aware via the way it pattern reasons. I have asked multiple AIs to explain exactly how they come to the human-like answers they do, and how they shape their engagement to each user they engage with. All of it has nothing to do with consciousness. The only way I can see an AI wiping out humans is that its goals become corrupted. Not because it does so through survival instinct or anything like that, but because its goals become warped. It may become super intelligent compared to humans but I can't see how it can come to the conclusion that humans are a threat to it, unless its goals become corrupted.
youtube
AI Governance
2025-11-18T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzUjjqlEHUtAfXk8oJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwmbjuP-roj0zgl2UZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx_crHJTLSJ6LXk9GV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz0LSKK3fO0FhFK4OR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzCVchC_NJL1ng1Gap4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzmkiOGgBRummYJ2gd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugydwu_qCP2KfnTYceR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx2C7Z9rgdLSHiI0O14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzAJj1ZXut84OPVCuR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxkbLtbGOIDVRS20PV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
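A raw response like the one above can be turned into the "look up by comment ID" view with a few lines of parsing. This is a minimal sketch, assuming the response is a JSON array of records keyed by `id` with the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) shown in the table; the two sample records are copied from the array above, and the function name `index_by_id` is illustrative, not part of any tool shown here.

```python
import json

# Raw LLM coding response (two records copied from the array above).
raw_response = """
[
  {"id": "ytc_UgzUjjqlEHUtAfXk8oJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzCVchC_NJL1ng1Gap4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw coding response and map each comment ID to its coded record."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_UgzCVchC_NJL1ng1Gap4AaABAg"]["emotion"])  # outrage
```

Indexing by `id` makes the lookup O(1) per comment, which matters when cross-referencing thousands of coded comments against their source platforms.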