Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "@newwaveinfantry8362 oh yeah because blurry, Smudged artwork with a generic sty…" (ytr_UgwHKFf6E…)
- "The bill is to protect AI development from brainless jackasses who read too much…" (ytc_UgwI3qjzc…)
- "He talked about gaining approval to use Stevens approval to use his likeness. B…" (ytc_Ugyj_jKcV…)
- "How??? References help LIVING creatures think, AI does not live, it isn't alive.…" (ytr_UgyxXrRXe…)
- "We give the AI all the knowledge humans possess but cannot use correctly. Then, …" (ytc_UgwPS7z4k…)
- "I tried using Claude which I often see mentioned as the best AI for coding. It c…" (ytc_UgyFew5jk…)
- "AI makes my job faster, but it doesn't mean AI can do it independently. The huma…" (ytc_UgyW0dXQt…)
- "just check at the woman's bin behind her bruh, it's another evidence of AI 💀…" (ytc_Ugz-5zU5u…)
Comment
I am not an expert in the slightest and may be missing this point already being made, but I’ll make the point anyway.
It sounds to me like we should shelve the idea of “super intelligence” when we know it’s possible. At least until we understand the real mechanisms behind the workings of our own intelligence and understand how to record and quantify the empathy that we exhibit, and test our tendency so in appropriate situations.
Possibly we could use this to hook a “fair and honest” human brain up and study it doing empathy response threshold tests. Then use those data sets to manage alignment by training the Ai to use meta cognitive motive analysis in solving its training tasks?
Source: youtube · Video: AI Moral Status · Posted: 2025-10-30T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy_rDPDvKX0pS5HAQJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxDgkTHqZUnDSxzgBR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxSKTYxXrFpO2pfTJp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy_68ocVxj5BqA94xB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgznAbcAKU8mGgjrHeV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwkAbmGeIRBDHciJy94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzi7K2E_nUHh7Ddp354AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwwybZKX2vaD8uehN94AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugyr-WOX1d459fZmwj94AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxIwDCQQsrqbymjxo94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
```
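A lookup by comment ID over a raw response like the one above can be sketched in a few lines. This is a minimal illustration, not the tool's actual implementation: it assumes each raw response parses as a JSON array of objects keyed by `id`, with the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) as fields, as in the sample shown. The variable and function names are hypothetical.

```python
import json

# Hypothetical sample mirroring two rows of the raw LLM response above.
raw_response = """[
  {"id": "ytc_UgwwybZKX2vaD8uehN94AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxIwDCQQsrqbymjxo94AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"}
]"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse one raw LLM response and index its coded rows by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgwwybZKX2vaD8uehN94AaABAg"]["policy"])  # ban
```

In practice a step like this would also validate that each row carries all four dimensions and that the ID was actually in the batch sent to the model, since LLM output can drop or invent rows.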