Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Watch some Geoffrey Hinton interviews(Godfather of AI) He will make you scared t…
ytc_UgwnFKrRi…
Turns out people find meaning in work and will continue to do so regardless of A…
ytc_UgyF1JMw6…
I think there are actual good use cases for AI in academia, like for checking po…
ytc_UgxDznxTk…
I asked my AI to watch this video
Nova said this (running on Haiku and Sonnet):…
ytc_Ugzen5wO9…
Because if you put "AI" in a headline then you get more clicks.
The problem rea…
rdc_n8m7gr2
@zip10031 You see, we respect people who create in this household. Generative ar…
ytr_UgzfCZrNX…
We're not stopping development. Let it continue. No pause. Should have thought…
ytc_UgwHH48th…
By the way AI has never been wrong before. This program will be able to profile …
ytc_UgwvdR7i0…
Comment
After listening to an interview with an AI hacker, I'm pretty sure this is selectively picked studies being pushed under a certain agenda. If a hacker that successfully manages to break every AI's safeguard doesn't find all this insanity despite the AI version, then it sounds like an agenda. I'd believe a hacker over any organization. Especially when there's money involved.
Edit: grammar
youtube
AI Moral Status
2025-12-16T08:3…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyShtfUS21rxSqkaeN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugya413nnaKIHWlFo_V4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwvxrmpUlCH2wJLRU54AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxyhwM04rhm8-cUo_l4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw7dBxhcZWJ_tkyZPN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgynzXoErDPuphe5UKZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzkK5u9ETW2qUPV2it4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgytoNt93eSRdYCqD2R4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz02YS04VoozQcxMtl4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugypsa_RyBFxn1JCEPB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]
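The raw response above is a JSON array of per-comment records, each carrying the same four dimensions as the coding-result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and validated — the allowed label sets below are inferred from the values visible in these samples, not from the project's actual codebook, and `parse_coding_response` is a hypothetical helper:

```python
import json
from collections import Counter

# Allowed labels per dimension, inferred from the responses shown above.
# Assumption: the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"liability", "regulate", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only records whose
    labels all fall inside the expected label sets."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # malformed record: skip rather than crash
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example with one valid record (taken from the response above) and one
# record carrying an out-of-codebook label, which gets dropped.
raw = '''[
 {"id":"ytc_Ugw7dBxhcZWJ_tkyZPN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_example_bad","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"unclear"}
]'''

coded = parse_coding_response(raw)
emotions = Counter(r["emotion"] for r in coded)
```

Dropping (rather than repairing) off-codebook records keeps the downstream counts honest; rejected IDs could instead be queued for a re-coding pass.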