Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I am an animator and character designer. My family and friends are constantly re…" (ytc_UgwgVh8b3…)
- "I think the AI doomsayers are missing the real danger. Machines, even very intel…" (ytc_Ugyu3c4XV…)
- "It's not that -- it's that this is a novel issue with (possibly) no legal recour…" (ytr_UgwfHbGRT…)
- "5 million worldwide is developed countries... Automated cars replacing taxis and…" (rdc_cz2r0ps)
- "That long pause is basically him saying, “well there’s a 99% chance your kid is …" (ytc_UgzeAcm8M…)
- "I'm going to take the reason those people were fired with a tiny grain of salt. …" (ytc_UgxZiDQbY…)
- "There's nothing wrong with ai as long as you label it as such. I do work in a va…" (ytc_UgwLjl2M0…)
- "the ai companies literally stole the artists works calling nightshade abuse is w…" (ytc_Ugykzb0wD…)
Comment
Technological optimists ignore the negative consequences of new technologies. How do we ensure that robots are not used for evil? At the same time, they will be valuable for us, to save labour. But the "AI mindcloud" seems dangerous for us, as several SF movies illustrate. Most engineers do not have ethics training. They create new possibilities only because they can, without full consideration of possible consequences.
youtube · AI Moral Status · 2019-10-22T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
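Each coded record should only use values from the closed category sets of the coding scheme. As a minimal sketch of a sanity check (the allowed value sets below are inferred from the codes that appear on this page, not taken from the project's actual codebook), a record can be validated like so:

```python
# Validate one coded record against the category sets observed on this page.
# NOTE: these value sets are an assumption inferred from the sample output
# shown here, not the project's authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "government",
                       "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "mixed", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return a list of error messages; an empty list means the record is valid."""
    errors = []
    if not record.get("id"):
        errors.append("missing id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            errors.append(f"{dim}: unexpected value {value!r}")
    return errors

record = {"id": "ytc_UgxU7rAiaRJInY4j-OJ4AaABAg",
          "responsibility": "developer", "reasoning": "consequentialist",
          "policy": "liability", "emotion": "fear"}
print(validate(record))  # []
```

Running this on each parsed record before storing it catches malformed or off-schema model output early.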
Raw LLM Response
[
{"id":"ytc_UgweBe8XTNidctDjyGR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxU7rAiaRJInY4j-OJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzWL6Dz1jncS582_DF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzERx4Yk-q1dplLyUp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwxpFm1mpTISE7AsZZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxOO4_yEQYq1KGCDnd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxZGjkpq-zzI6KGRNJ4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxcEbYPzqI2gnxY2MZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyrcFA1_ZAPJrODCS94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz-GO7ZyxLbD0wwE2p4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
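Because the raw response is a JSON array with one object per comment, looking a coding up by comment ID is a single parse-and-index step. A minimal sketch (the `raw` string stands in for a stored response like the one above, abridged to two records; the helper name is illustrative):

```python
import json

# `raw` stands in for a stored raw LLM response like the one above
# (abridged to two records for the example).
raw = '''[
  {"id": "ytc_UgweBe8XTNidctDjyGR4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxU7rAiaRJInY4j-OJ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

def index_by_id(raw_response: str) -> dict[str, dict]:
    """Parse a batch response and index the coded records by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw_response)}

codes = index_by_id(raw)
print(codes["ytc_UgxU7rAiaRJInY4j-OJ4AaABAg"]["policy"])  # liability
```

If the model ever emits two records with the same ID, the later one silently wins in this sketch; a production version would want to detect and log duplicates.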