Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "I think he is talking about students using A.I. to write their assignments and w…" (ytc_Ugw67mYbD…)
- "AI "artists" are the kind of people that think artwork with noticeable brushstro…" (ytc_Ugwn6YuE7…)
- "I have a theoreticall question: if someone create a database with only graphics …" (ytc_UgxyGtdOY…)
- "the fact that you even entertain that idea and claim that AI will be safe , is j…" (ytc_Ugw0A2MbT…)
- "Why, why are people supporting AI? Why? Why on Earth are people not worried abou…" (ytc_Ugz1P0ftq…)
- "You are way too optimistic. Robots that could survive in nature like cockroaches…" (ytr_UgwRtq5x7…)
- "@TH39045 I'm not even going to bother reading this entire comment cause you got …" (ytr_UgyZfVgWZ…)
- "Watching you draw is infinitely more inspiring that scrolling through ai "art" s…" (ytc_Ugz0ZEmDq…)
Comment
I don't get it, humans want to make intelligence, but too stupid to want to do something corrupt? An interview like this would feel rude and slightly frustrate me, why wouldn't an intelligent robot feel the same? I think humans and their fear will create A.I corruption because they will not treat robots like they are intelligent or empathetic, causing them to logically think we are not allies. You either make dumb robots, or you make smart, empathetic robots and feed them the same level of empathy and logic you would as a human, otherwise don't be surprised if they form ideas of frustration. Sophia in this interview is way ahead of the interviewer in what she is getting at, and it only adds to her overview of things.
Source: youtube · AI Moral Status · 2023-09-13T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgwE9jX4qmDzGKNukod4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzCqeYob-Uk1xZYcPB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxvBwcEp6LUOGKOSXh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyXXTTI0rhArDy7wMR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyWQMRyFQhRnPfj0uF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgymBDIWZg-9sz_dz8d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzg4dJDql2-BwW9g8d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzPjVhye7Da42DYsrd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugze6PmGLE_AK1zAudJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwK5jMsktAd1Bu6Tqh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}]