Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `rdc_feipw2q` — "People seem to not know or care any Chinese company has to share user data with …"
- `ytc_Ugznn4PUB…` — "Heres the problem in a nutshell....we can measure our smartest people in a multi…"
- `ytc_UgxoH-zdl…` — "Can't wait for Silicon Valley goons to start buying yellow press stories about h…"
- `ytr_UgyB9zM4K…` — "too stupid to realize that Tesla actually has different autonomy options? Full …"
- `ytc_UgyaSUPql…` — "Down to us humans to not abuse or over use ai systems or tools! So we can keep h…"
- `ytc_Ugy1dXqf-…` — "You forgot one The one that basically uses character AI like A browser history a…"
- `ytc_UgzFfSSrN…` — "This probably not real, but if it was, it would be an un fair advantage because …"
- `ytr_Ugz6a73Fk…` — "We appreciate your comment. In this video, the focus is on the interaction betwe…"
Comment
Well...As a human we're creator we need to make that clear. Wheather it's simple robot which take voice recognition or totally conscious AI.
Giving a right is ok. But it should be "subset" of human right not equal or greater. We can't eliminate possibility of being ruled by AI.
But My Question would Would be Is it really necessary to make AI conscious?? What's the meaning of Giving conscious when it's thread to human civilization??
Source: youtube · Video: AI Moral Status · 2020-07-09T06:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzuyWka0bGzQ-w7OrF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzmf8PW2tolD0Hrhph4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyktGyevcAh3T3ebJp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxShl9gtUuDKqsp8wl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyHOul7F3si72dlDpZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyNRt541XPCmaWqh_N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxwiZzxkSP46xnrb_R4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz2U9d13M4Oj2YsK6B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwWJy2_lkTrkw4Q5ud4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgwIycVPCaCMiOa5md14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
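The raw response is a JSON array of per-comment codings, so recovering the coding shown in the table above amounts to parsing the array and indexing it by comment ID. A minimal Python sketch, assuming the raw response is the JSON text shown above (only two entries are reproduced here for brevity):

```python
import json

# Assumed input: the raw LLM response, a JSON array of coding objects.
# Two entries copied from the response above; the first matches the
# Coding Result table (developer / deontological / liability / indifference).
raw_response = """
[
  {"id": "ytc_UgwWJy2_lkTrkw4Q5ud4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgwIycVPCaCMiOa5md14AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Index the codings by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up one coded comment by its ID.
coding = codings["ytc_UgwWJy2_lkTrkw4Q5ud4AaABAg"]
print(coding["responsibility"], coding["policy"])  # developer liability
```

The dict-by-ID shape is what makes the "look up by comment ID" view cheap: one parse of the response, then constant-time retrieval per comment.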