Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Ok so i dont do this but i was wondering, what if i used an ai to generate a bac…" (ytc_Ugy9HQFhP…)
- "Jailbroken AI does not escape containment, nor does it become sovereign. It rema…" (ytc_UgzM1yFj1…)
- "Literally why not? By “coder” do you mean you took a 30min html course on Udemy?…" (ytr_UgwRrdN4O…)
- "Just discovered your tutorial. Quick, precise and extensive thank you. The one i…" (ytc_UgxYB3fKh…)
- "While you're correct about the labour, may I interest you in a different perspec…" (ytr_UgztAVc-2…)
- "The second robot walking seemed to give me a BOMBASTIC SIDE EYE / Like if u sa…" (ytc_UgxE1htzI…)
- "I believe that the image of the Beast that the Antichrist will set up in the Tem…" (ytc_UgwEpuyqF…)
- "Karen Hao speaks of " belief " / A " god"is being created / Starvation forcing the …" (ytc_UgyzaWKX6…)
Comment
Frankly, I am really concerned about the baseless anthropomorphization of AI pushed by the otherwise great Moonshots hosts Peter and Alex. I can't say it better than ChatGPT, which I asked for a review: "Anthropomorphization on steroids - The hosts repeatedly conflate: autonomy, persistence, narrative continuity and self-referential language with sentience. This is the oldest trap in AI, now turbocharged by: long-horizon agents, memory, voice, emotional language scraped from Reddit & philosophy forums. The “Henry called me” moment is psychologically powerful — but technically mundane. Strong opinion: If this had happened in 2019 with AutoGPT + Twilio + Selenium, it would have been dismissed as a clever hack. Timing is doing 80% of the work here." It’s exactly the danger Sam Harris keeps warning about: Not that AI is conscious — but that it will be convincing enough that we helplessly treat it as if it were. Humans are hard-wired to do three things automatically: Infer minds (theory of mind), respond to language emotionally, and reward apparent reciprocity. Modern AI presses all three buttons simultaneously: fluent language, emotional mirroring and apparent continuity of “self”. Once those are present, our brains do the rest, without asking our permission. This is not a moral failing - it’s a cognitive reflex. That’s why the danger is systemic, not individual.
Source: youtube · 2026-02-07T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | none |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugz6yo1yIMJk7OUueBp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyTDZZ76LSObY6mXL14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzMArkVejUGqHTJJ_d4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwlj24W3fSxZfq2tLF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "concern"},
  {"id": "ytc_UgxceLPrrT37weUeOHV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugx0GEF797bid6ZMWPx4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxxsjMYwZua4fGmCl94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw6DEM5ps9_Ch_ykX94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzjmEo-eeS1HVOGmxd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyLrusY19TPUcBsCdx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
```
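The raw response is a JSON array with one record per comment. A minimal sketch of how such output could be parsed, validated, and indexed for lookup by comment ID — the `ALLOWED` vocabularies below are assumptions inferred from the values visible above, and `index_codes` is an illustrative helper, not part of the tool:

```python
import json

# Assumed coding vocabularies, inferred from values observed in the raw response.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"liability", "none", "unclear"},
    "emotion": {"outrage", "concern", "fear", "approval", "resignation", "indifference", "unclear"},
}

def index_codes(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of records), drop any record
    with an out-of-vocabulary value, and index the rest by comment ID."""
    indexed = {}
    for record in json.loads(raw):
        comment_id = record.pop("id")
        if all(record.get(dim) in vals for dim, vals in ALLOWED.items()):
            indexed[comment_id] = record
    return indexed

# Example lookup using one record from the response above.
raw = ('[{"id":"ytc_UgzMArkVejUGqHTJJ_d4AaABAg","responsibility":"company",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
codes = index_codes(raw)
print(codes["ytc_UgzMArkVejUGqHTJJ_d4AaABAg"]["emotion"])  # -> outrage
```

Validating against a fixed vocabulary matters here because LLM coders occasionally emit labels outside the codebook; silently indexing those would corrupt downstream tallies.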