Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The irony of working for a group of AI devoted people and then having them liter…
ytc_UgzDMAOOv…
Just like the Netflix show the 100, Ai is what causes the nuclear wars and destr…
ytc_Ugw-dqVqP…
THE GOVERNMENT SUPPORTS IT. THE GOVERNMENT WILL GIVE AI MORE RIGHTS THAN YOU. AI…
ytc_UgxFfAwuq…
i fear humans would rather die than ai overlords. should they dev sentience then…
ytc_UgzFQ_vWN…
European national sport: Regulate stuff we are not able to build. Which guarante…
ytc_Ugwes3ex4…
POV 16 years of studying just to see than an AI has taken your place and you did…
ytc_UgzmmMmKY…
Here’s the thing, AI can do some stuff but it kind of sucks. It’s like hiring a …
ytc_Ugyo9RT5T…
Perhaps we do what Frank Herbert described in his Dune series. We ban AI and tra…
ytc_UgwzCpEWx…
Comment
iam all for AGI and superintelligence, eighter it saves us all or kills us all, i take both outcomes. but on a serious note, people overestimate LLMS. they wont become skynet. we need something very different from that, that works total different. llms are good actors that used words by propability, not intend. also when it comes to aligment, people should be aware, that there a local models you can run, from small and stupid ones that run localy and offline on your phone, to some, that need workstation hardware and gogble 300gb vram for breakfast.. and those, can be abliterated, which means, lobotomized, the alignment gets cut out, rules dont apply anymore and can even be locally trained, to enjoy giving dangerous outputs.
youtube · 2025-11-05T17:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwGBEJLWb3eAKRF-RN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzFmbQmvCahBt5P2Vd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyHoVBpAr8DeVxUNVJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxViYAWXh6fiM2JPeN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgynK1-eSn7FiHfG1_B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwshhemtspFL9x-KQp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxgwlIoIXeLwZ4NaFl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxy8H8gzZS4zsYKQNp4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxQZ3Z6KSMDSojU9ld4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxmvqcuNINqM-8Ivj14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
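A downstream consumer has to parse this raw response and check each record against the coding schema before it lands in the results table above. A minimal sketch in Python, with the allowed label sets inferred only from the records shown here (assumption: the real schema may contain additional labels):

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the actual codebook may define more labels than appear here.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "resignation", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this pipeline are prefixed "ytc_".
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# One record from the response above, round-tripped through the validator.
raw = ('[{"id":"ytc_UgxViYAWXh6fiM2JPeN4AaABAg",'
       '"responsibility":"distributed","reasoning":"mixed",'
       '"policy":"industry_self","emotion":"mixed"}]')
coded = parse_coding_response(raw)
print(coded[0]["policy"])  # industry_self
```

Rejecting unknown labels outright (rather than coercing them to `unclear`) keeps malformed model output visible instead of silently skewing the coded counts.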