Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `ytc_UgwkONbIA…` — "Are you guys for real? Deepfakes, you can't even force Russian to stop killing U…"
- `ytc_UgygNz6Vw…` — "So you're saying AI is just making shit up? Hmmm, maybe it's already more human …"
- `ytc_UgyETH9su…` — "this lady is westernized & doesnt know china was using llm in hospitals way befo…"
- `ytc_UgwAWhESQ…` — "In what fucking world should this be legal? wtf has ai done to benefit humanity …"
- `ytc_UgzMMvMZA…` — "They made is a sycophant in order to get people hooked on giving it their person…"
- `ytc_UgwM8ANjU…` — "Now ai will replace the whole humans and the war between humans and robot will b…"
- `ytr_Ugx8-M-Ka…` — "IF YOUVE EVER NOTICED AN AI APP CRASHING IS SIMILAR TO AND LIKE HIDING FROM SOME…"
- `ytc_UgzTFRADU…` — "When AI books into therapy and during a virtual session under some deep fake; i…"
Comment
Honest question, which one of these AI doomers are to be taken seriously? I mean... They pay their bills by spreading fear and ignorance... Their whole business model is based on it... It's hard to tell if they realize that they are just projecting what they would like to do if they had no morals...
It is freshman psychology... AI acting like a Rorschach test. All the paranoid people that left Open AI for Anthropic... Their fears are their projections, they are just showing their true colors. P.S.-Eliezer S. Yudkowsky peeked at 16... when "he visualized" the singularity... From there on its been a race to self-inflate the ego. Their concept "alignment" is just a synonym of perpetuating the status quo. I for one love the fact that we are boiling the planet alive, allowing for a "dozen" of apes to control all the other by wealth stockpiling... No it's all good... AI is the problem...
youtube · AI Governance · 2025-04-03T03:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwEfZ5J4tiLgH1yx8V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyt6ZUol9UQsJS29Dx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyVt2AS6JEEBQaIK4Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwxpeGN2cEU3HH-suR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx0JVqlu-E5k6UO_yZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzI8A-jEWUY658RT1p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwXm0Wuvl07tQboqUJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy4inMgfuXerdaChO54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzfqDWqLj92fBXvqO94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwhej_REIzE5WMB9iJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
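The lookup-by-comment-ID step above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: it assumes the raw model output is a JSON array of records carrying an `id` plus the four coding dimensions shown in the table (`responsibility`, `reasoning`, `policy`, `emotion`); the sample record and helper name are hypothetical.

```python
import json

# Illustrative raw LLM response: a JSON array of coding records, one per
# comment ID (shape taken from the output above; this single record is an
# abbreviated sample, not the full batch).
raw_response = """
[
  {"id": "ytc_Ugwhej_REIzE5WMB9iJ4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
"""

# The four coding dimensions every record is expected to carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_by_id(raw: str) -> dict:
    """Parse the model output and index each coding record by comment ID."""
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        # Reject records missing the ID or any coding dimension.
        missing = [d for d in DIMENSIONS if d not in rec]
        if "id" not in rec or missing:
            raise ValueError(f"malformed record: {rec!r}")
        indexed[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return indexed


codings = index_by_id(raw_response)
print(codings["ytc_Ugwhej_REIzE5WMB9iJ4AaABAg"]["emotion"])  # outrage
```

Indexing by ID up front makes each subsequent lookup a constant-time dictionary access, which matters when inspecting individual comments out of a large coded batch.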