Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Bernie, professor Wolff was just on the Status Coup and explained the threat in …" (ytc_UgwqTfOhd…)
- "If it's driver fault, it means that Tesla autopilot is a failure. The name is wr…" (ytc_UgzmCrbee…)
- "I think AI isn't going to be a real threat until becomes a multiphysical entity …" (ytc_UgwVIWJ90…)
- "Just ask CHAGPT it giving the correct Answer....Ai never replace jobs which are …" (ytc_UgwY9XktH…)
- "She is extraordinarily ignorant and dismissive about the existential risk from A…" (ytc_UgxlrxbtV…)
- "Why was everyone so weirdly jacked in that Ai art lol. Its funny how in those sh…" (ytc_UgxoOEI7E…)
- "This is really helpful. I'm about to teach my students how to use AI and didn't …" (ytc_Ugyut_QfT…)
- "This AI is Y2K all over again. AI is so smart that I can just flip this switch o…" (ytc_UgzlIjG9x…)
Comment
I really don't think its ethical to implicitly lump LLM-induced psychosis in with the unprovable question of whether or at what point AI is conscious. As a result of the logical and scientific unapproachability of that question accompanied by severe ethical consequences of its presumably existent truth value, we cannot be too careful to give a societal yes to that question and start treating AI with natural rights. <3
Platform: youtube · Video: AI Moral Status · Posted: 2025-07-09T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwTyG3mSIg4KQw9_TJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxeDGQUGrM5IBDzIDV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxGGfa1z-5hgSEqpIp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzVowrW9I-t3nTHnQl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwkXdThy_6MLDsJR554AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz1PicQT7XewgX4RRd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy5Y-stoTT-dihhIM94AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxcwmo700RorfwWuDh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzax4ABDmbGBnogm1R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwSvT--sAf1CD2f5Kp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
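The raw response above is a JSON array of per-comment records, one object per coded comment, each carrying the four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of parsing and validating such a response follows; the category vocabularies are inferred only from the values visible in this response, not from the actual codebook, so treat them as assumptions.

```python
import json

# Allowed values per dimension, inferred from the raw response above.
# The real codebook may define additional categories (assumption).
SCHEMA = {
    "responsibility": {"user", "developer", "ai_itself", "distributed"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "ban", "regulate", "industry_self"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "resignation", "mixed"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    dropping any record whose values fall outside the expected
    vocabularies (so malformed model output never enters the dataset)."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        codes = {dim: rec.get(dim) for dim in SCHEMA}
        if cid and all(codes[dim] in SCHEMA[dim] for dim in SCHEMA):
            coded[cid] = codes
    return coded
```

Applied to the response above, the record for `ytc_Ugy5Y-stoTT-dihhIM94AaABAg` yields exactly the values displayed in the Coding Result table (distributed / deontological / regulate / fear).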