Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "@synkv you think someone lost their job because a hoyoverse employee made a post…" (ytr_UgxnhYRWz…)
- "We'll get to know AI has gained consciousness in the same way a fly gets to know…" (ytc_UgxEQYc1U…)
- "chatgpt is most likely an advanced google search fetching service, if it have …" (ytc_UgzhTsH8i…)
- "Its ignorant to believe that \"the age of abundance\" will breed a utopia, there w…" (ytc_UgyGDvrlO…)
- "Only for the short term. Long term it will be self-destructing. Cutting low payi…" (ytc_UgztCMS0L…)
- "Of course it's a robot it probably hurt him more than the robot when he hit him…" (ytc_UgzIc-V8P…)
- "Even just something as simple as chat gpt is incredible for making basic program…" (ytr_Ugx1lzNSE…)
- "My next fear is them going after bases. And those have already had some controv…" (ytr_UgwU5pj1k…)
Comment
And this is why I am *already* so skeptical of AI and I don’t trust people to be able to use it. The output of AIs is sycophantic and arrogantly confident. I already don’t believe people have the ability to be skeptical enough about the output to tell what might be hallucinations or simply better matching of the ends of sentences vs true statements.
Edit: now further in the episode. I hate it even more. I’m usually so keen on technology (as I expect are many of Hank’s followers) but I’m so far beyond scared because the people driving it appear to be ignoring the risks. So it’s reducing my ability to find it cool.
Also AI psychosis is terrifying and people (particularly those who have a tendency towards mental health issues) need to be educated about the risks.
It’s like we just created something cool but really dangerous and then let the world have it without a manual or any warnings.
51:06 - this. All of this.
Source: YouTube, "AI Moral Status" (2025-10-31T08:0…)
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxqeZPWCijSy8vLmfV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwRunnBJ6JZkIyL7Rl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyXvv2Mh9QHyvRqQIl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugwqt9QWbFbNyhP3k5Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyQ6cX3vzGK0IYWCip4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzsZXVqHuryCnOFNR54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyeD4KB3mZTSgAfyTt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxdrjBu_20OJFahPuV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugzgpt1tdS4toFzLxIZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz957vNq8JtwrGAZ3d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]