Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Ai people don’t seem to understand why people are mad at them because they sell …" (ytr_UgybvIowu…)
- "Oh yeah? Cyrus A. Parsa, FOUNDER OF THE AI ORGANIZATION & former ceo of Twitte…" (ytr_UgzUPSmPM…)
- "I love how we are talking about all getting killed off by AI, no way to stop the…" (ytc_Ugxy8PH3E…)
- "I am an INTP ( i.e. a real inventor) . The AI's I have encountered are just not…" (ytc_Ugzec5iOW…)
- "So, somehow picking mostly male candidates from a pool of mostly male applicants…" (rdc_e7jrpn7)
- "lol AI(s) as in Algorithms -- trained by human behavior.. NLMs trained on HUMAN…" (ytc_UgwbysMCn…)
- "Who is in control 🤔... they where... they where okay when they could program AI …" (ytc_UgwM7TtaQ…)
- "If AI truly takes off that much, the economy and job market must adapt, and fast…" (ytc_UgxGU7--H…)
Comment
My stance is that a super ai isn't going to be an accident, but negligence. The structures required to make an AI that can work in the ways needed to truly surpass humans are specific design goals that would easily recognized as a bad idea. Human function is more than just algorithms and so as long as the AI is an algorithm we are mostly safe.
youtube · AI Moral Status · 2025-11-04T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzpJOJ5oHMJIZmgBL94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwnHk75fLrwbk95GTN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx4gimSeo580EIZhj14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgyAHhduq9mOAAt_mXB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyYH8M0j7512fDUwSd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyXYkRldTR9sh5kDHV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy492lhBdoP0viiX1x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"disapproval"},
{"id":"ytc_Ugw999Q8W5OZ6vjczSd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyuBcanRzedEgojXSl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxKgAwmaN93UQfXVH54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
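A minimal sketch of how a response like the one above can be parsed and indexed for the "look up by comment ID" view. This assumes only what the log shows: the model returns a JSON array of records with the keys `id`, `responsibility`, `reasoning`, `policy`, and `emotion`. The names `parse_codings` and `REQUIRED_KEYS` are illustrative, not part of the actual tool.

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = """[
  {"id": "ytc_UgzpJOJ5oHMJIZmgBL94AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx4gimSeo580EIZhj14AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"}
]"""

# The five keys every coded record is expected to carry (per the log above).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a raw LLM response into {comment_id: coding}, rejecting malformed records."""
    records = json.loads(text)  # raises json.JSONDecodeError on broken output
    index = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing keys: {missing}")
        # Index the four coded dimensions by comment ID for fast lookup.
        index[rec["id"]] = {k: rec[k] for k in REQUIRED_KEYS if k != "id"}
    return index

codings = parse_codings(raw)
print(codings["ytc_Ugx4gimSeo580EIZhj14AaABAg"]["policy"])  # liability
```

Keeping validation at parse time means a truncated or malformed model response fails loudly when it is ingested, rather than surfacing later as a comment whose coding silently lacks a dimension.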