Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or start from one of the random samples below.

Random samples
| Comment preview | ID |
|---|---|
| of course they are -- they're constantly evaluating and monitoring their sourcin… | rdc_d3s2bh6 |
| @laurentiuvladutmanea 99% of animals can never perform what these AI model could… | ytr_UgwWrOFCI… |
| I guess its a good thing to convey the most critical aspect of the job to a pers… | ytc_UgyeI26GU… |
| The people that are going deep with these AI relationships will have a lot of tr… | ytc_UgwAbVDKB… |
| That's true be warned, how you feed you will earn. Ai is a social project, we ar… | ytc_UgzkfXIpY… |
| China's authoritarian government needs to be stopped, as all imperial government… | rdc_f1uj2ap |
| This raises a bigger question for me: if AI can’t truly understand emotions with… | ytc_UgylQCsp6… |
| Nah i think the Google people are messing with something very alien like . And t… | ytc_UgyZCIe9M… |
Comment
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Would it be too much to ask for these laws to be incorporated into AI?
youtube · AI Harm Incident · 2025-07-27T05:1… · ♥ 190
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
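As a reading aid, a coded record like the one above can be captured in a small typed structure. The sketch below is an assumption inferred only from what is visible on this page: the field names match the keys in the raw response that follows, and the value lists are just those observed here, not a full codebook.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class CodedComment:
    """One comment coded along four dimensions (schema inferred from this page)."""
    comment_id: str      # e.g. "rdc_d3s2bh6" or a "ytc_..." YouTube-style identifier
    responsibility: str  # observed values: developer, user, ai_itself, none
    reasoning: str       # observed values: deontological, consequentialist, unclear
    policy: str          # observed values: regulate, ban, liability, none, unclear
    emotion: str         # observed values: approval, fear, outrage, resignation, indifference, mixed
    coded_at: datetime   # when the coding was stored, e.g. 2026-04-27T06:26:44.938723
```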
Raw LLM Response
```json
[
{"id":"ytc_Ugz7dumYfEZyMiY0C_F4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyp2o__hTHRtBDCsYl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzFENkSFE3ueufiaQd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwM2aAmqP9bjyuzVJN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxOv_hw2woQQrVV7rR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwpbodeR1DfrYsWbWJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx4GcPdKNBZr2mbJid4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxp0rf5f_qiHzREE8h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzDyduMGGfOKjmg7n54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzRh2zcQWKmz_nk-pJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
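Because every object in the array carries the comment `id`, a stored raw response can be re-parsed and indexed to serve the look-up-by-ID view above. Below is a minimal sketch, assuming the raw response is kept as the JSON string shown; `index_raw_response` is a hypothetical helper, not part of the actual pipeline.

```python
import json


def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array of per-comment codings)
    and index the records by comment id for direct lookup."""
    return {record["id"]: record for record in json.loads(raw)}


# Usage with a single record in the same shape as the batch above.
raw = ('[{"id":"ytc_UgzDyduMGGfOKjmg7n54AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"approval"}]')
coding = index_raw_response(raw)["ytc_UgzDyduMGGfOKjmg7n54AaABAg"]
print(coding["responsibility"], coding["policy"])  # developer regulate
```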