Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up with its comment ID, or browse the random samples below.
Random samples

- `rdc_jrzz3gk`: "It's a big dilemma for us. Of course, we cannot arrest him," Mashatile said…
- `rdc_maknfgu`: A lot of people believe that but it's not actually even how LLMs work, and it's …
- `ytc_UgwcIX7O-…`: AI just copies from random places online mostly. thats why its easy to detect be…
- `ytr_Ugyrf4evS…`: We apologize for any confusion. The name "Sophia" has Greek origins and means wi…
- `ytc_UgwtJ6DuW…`: nah that ai pic was juat as awesome. I fucking hate people who sniff their own a…
- `rdc_oguip9d`: Odd that the major quantum companies haven't already co-opted this announcement.…
- `ytc_UgykNMhZN…`: At first it was "You & AI". / Then it was "AI & You". / Now it's "AI".…
- `ytc_UgiVw7y25…`: why would anyone be dumb enough to give an in animate object virtual feelings it…
Comment

> AI can only do what humans program it to do. AI shares the same restrictions in intelligence that humans do. The only way to make advanced AI is to make an AI that can observe, experiment, build models, refine those models with further observation and experiments, and implement working models in real-life scenarios. The first job that AI will successfully replace humans in is, ironically, programming. AI can only surpass human ability, and intelligence, when it can successfully program - and improve - itself. Just like humans can only improve when they become self sufficient, AI can only improve when it becomes self sufficient. We have not yet reached the peak of inflated expectations. I can still imagine a lot of unrealistic expectations that nobody had mentioned, yet.
youtube · AI Responsibility · 2026-02-06T04:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgyVsTEvusQ2Op36uJd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwyKWE-bZ8C1E2gbQV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzs9evmwpvmBsa0p7F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwMuBdZrMg5bpN65Hd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzfLlho4yvXN0Zw8QZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwWXM1U5N-Zn5WgBNl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwXtI5DT5JqBN_fEVV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxYChet5Oa5X-BII9h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwutY-lBv_AmJEbyNx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy3spewL_ECV_FXueh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
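The lookup-by-comment-ID workflow above can be sketched in Python. This is a minimal illustration, not the tool's actual code: `index_by_comment_id` and the two sample records (copied from the raw response above) are hypothetical scaffolding; only the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown.

```python
import json

# Two records copied from the raw LLM response above, as a stand-in
# for the full JSON array the model returns.
raw_response = """[
  {"id": "ytc_UgyVsTEvusQ2Op36uJd4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwutY-lBv_AmJEbyNx4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

# The four coding dimensions shown in the "Coding Result" table.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Parse a raw coding response and key each record by its comment ID,
    skipping records that lack an ID or any of the four dimensions."""
    records = json.loads(raw)
    return {
        r["id"]: {k: r[k] for k in DIMENSIONS}
        for r in records
        if "id" in r and DIMENSIONS <= r.keys()
    }

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgwutY-lBv_AmJEbyNx4AaABAg"]["policy"])  # prints "regulate"
```

Keying the parsed records by ID makes the inspect-by-ID lookup a single dictionary access, and the membership check drops malformed records instead of raising a `KeyError` later.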