Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Thanks for your comment! It seems like you might be referencing Sophia's name me…" (ytr_UgydpmvW6…)
- "Why did this look like irl Hogwarts , but fr the speech abt AI is soo true, the …" (ytc_Ugx3t4TgB…)
- "Just imagine a search engine that controlled what information we saw for the las…" (ytc_UgwqyOWZH…)
- "For a second, I thought “people like to drive in video games. Maybe we could hav…" (ytc_Ugwl-ZMLE…)
- "Ai should be illegal tbh, its out here about to destroy entire job industries an…" (ytc_UgxeYuWTe…)
- "Keep in mind that AI robots will guard the energy fields as well from human reta…" (ytc_UgxHqSGGD…)
- "Forgive the language i use, no matter how appropriate I think it might be, this …" (ytc_UgwhaAMSs…)
- "LOL Russian trolls (aka republicans) can’t beat ‘em, best they can do is antagon…" (rdc_jy0r9h7)
Comment
If humans could develop artificial intelligence so that robots would be able to think by theirselves, even being able to create new and more efficient robots and thus reproducing (by creating new robots),...wouldn't it be considered "life" (Like transformers life :P)? I mean, robots by then wouldn't need humans at all. So, if people that say that abortion is anti-etic because it prevents life to develop, preventing artificial intelligence to develop isn't the same?
Source: youtube
Posted: 2013-06-27T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugwf8OHi6HLTfI3l9Tl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy2wim4Us7Hq2JT5fh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzs9gC2tJDC-stZdFx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyvDTGSeY2ddK3gNQp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzwZ3jfrMIcqMn2plR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyJOLbkO5X0n5YcUn54AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxGE5rP2don4fja5Yd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwsC4kTTP1wwi9SEFN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwc6bUu3Lj4NXzfPAl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwqA6jr9GQVtv3byz14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
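The raw response above is a JSON array, one record per comment, with the same four dimensions shown in the coding-result table. A minimal sketch of how such a response might be parsed and validated is below; the allowed category values are inferred only from what is visible on this page (the full codebook may define more), and the function name `parse_coding_response` is illustrative, not part of any shown pipeline.

```python
import json

# Allowed values per dimension, inferred from the samples on this page.
# Assumption: the real codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "mixed", "indifference", "approval"},
}


def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each record.

    Raises ValueError if the payload is not a list of objects, if a
    record is missing an "id", or if a dimension holds an unknown value.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unknown {dim}={value!r}")
    return records
```

A record that passes validation can then be written straight into the per-comment coding table shown above.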