Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click any ID to inspect

| ID | Comment excerpt |
|---|---|
| ytr_Ugw1tEgHz… | They didnt know it was bias. The bias is very well hidden and subtle, not someth… |
| ytr_Ugw5uiOjn… | @Apr0x1m0 What do you mean it has been proven? No, it hasn't. That is the CLAI… |
| ytc_Ugz0dle2A… | you missed a major one about alignment. we could end up with a perfectly aligned… |
| ytc_Ugx58tFhc… | Thank you for the explanation. So, if I wanted to protect the content on my webs… |
| ytc_Ugzv1GNYd… | It probably would be a good idea to get groups of countries, which are not on th… |
| ytr_UgyUYCaKw… | hello can you help me.. im debating for "the rise of language generating ai can … |
| ytc_UgyGwF9eo… | This the d👀namite that ticking. After watching this video, & with Mo Gawdat and … |
| ytc_Ugx77MyfL… | FEAR MONGER AND CLICK BAIT. AMAZON CRUSHED MOM AND POP AND IPHONE REPLACED A NU… |
Comment (quoted verbatim, typos preserved)

> AIs need controlling, programs taht tell it what not to do. You do not just feed a computer system [before AI ] and allow it to read a lot of stuff on the internet and "digest" and read many history and philosophy books. It needs trainging JUST AS A CHILD does in real life. Otherwise, OF COURSE it acts chaotic. Man, all you need to do is teach beginning AI systems to use logic and reasoning and some psychology and philsophy first. THEN you allow it to read and swallow science, math and history and cultural studies and crime prevention techniques.

Source: youtube · AI Moral Status · 2026-01-23T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
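The dimension table above can be rendered directly from a single coding record. A minimal sketch (the dictionary keys and values are taken from the table above; the rendering helper itself is illustrative, not part of the pipeline):

```python
# One coding record for the selected comment, as shown in the table above.
coding = {
    "responsibility": "developer",
    "reasoning": "deontological",
    "policy": "regulate",
    "emotion": "indifference",
}

# Build a markdown dimension table from the record.
rows = ["| Dimension | Value |", "|---|---|"]
for dim, value in coding.items():
    rows.append(f"| {dim.capitalize()} | {value} |")
table = "\n".join(rows)

print(table)
```

Dictionaries preserve insertion order in Python 3.7+, so the rows come out in the same order as the table above.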
Raw LLM Response
```json
[
{"id":"ytc_Ugw5TLszYuktKWM_aup4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyl8HzYye6oWgdK88p4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyJSPaJfxuXZbK8ppp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyGcDUkjTets7H50rp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy5ebIT4V04jnU5UbB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwZrHNlAjRzlg-G0zV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgwMb7DR4sgpglO9qzN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxjS381YXYM7e0q3pd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzb174oC7BdbynFoZ94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxXsRqqArhMTyOwQgp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
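The "look up by comment ID" view above amounts to parsing the raw batch response and indexing it by `id`. A minimal sketch, assuming the raw LLM response is available as a JSON string; the two sample records are copied verbatim from the response above:

```python
import json

# Two records copied from the raw LLM response shown above.
raw_response = """[
{"id":"ytc_Ugw5TLszYuktKWM_aup4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgwZrHNlAjRzlg-G0zV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"indifference"}
]"""

# Parse the batch response into a list of per-comment codings.
codings = json.loads(raw_response)

# Index by comment ID for constant-time lookup.
by_id = {c["id"]: c for c in codings}

coding = by_id["ytc_UgwZrHNlAjRzlg-G0zV4AaABAg"]
print(coding["responsibility"], coding["policy"])  # developer regulate
```

In practice you would also want to handle malformed model output (a `json.JSONDecodeError`, missing `id` fields, or duplicate IDs) before indexing, since the response comes straight from the LLM.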