Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples (click to inspect):

- "Sounds like those scifi movie AI that predicts who is going to be evil in the fu…" (ytc_UgzzEiUaC…)
- "My question is if mass unemployment is coming (and its already started) how will…" (ytc_UgxBYIReo…)
- "HAHA SUCKS FOR ALL! IT STARTED A WHOLE ART WAR CHSS OF AI! AND MADE REALLY CUTE …" (ytc_UgyQvPADS…)
- "The problem is that it isn't a easily definable line that indicates when an AI i…" (ytr_Ugz52rg38…)
- "Yeah, here's the bombshell answer everyone is avoiding. They don't have a plan f…" (ytc_UgxQOPpiq…)
- "We appreciate your thoughts! While it might seem that way, Sophia is designed to…" (ytr_UgyJ0E1gN…)
- "@sqlevoliciousoh simmer down, they were actually visiting. You think every seni…" (ytr_UgyDxXeXA…)
- "Revenge p*rn is already a huge issue, so the implications of this are terrifying…" (ytc_UgwtKGDZ3…)
Comment (quoted verbatim as coded):

> If this bloke is one of the smartest people in AI we are all screwed. Too much time in front of a computer.
> To teach robots to have conciousness along with human values or likeness
> Will be the end.
> Humans! the only animal on the planet to kill for sport, fun, gain, greed, love, jealousy, religion, self promotion.
> And you want to teach robots human values??? absolutely INSANE.
> Mistakes are a human trait and this will be a biggy.
Source: youtube · Video: "AI Moral Status" · Posted: 2022-01-29T09:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugz9LLTHgqsXKgU2J-l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz8LbT4_g6M0Uk1GGx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwpR3ao7jxtHqd7FyF4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwQ8rzocKR_4KRnU854AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyVoX-aJlYuChLEwoJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyKNKHn4m9UzYEE-yV4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzKlpuCdau9NZQwvCF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxkXji3tGA-PNOz2jF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"ban","emotion":"resignation"},
  {"id":"ytc_UgzyL8Zujgm0uH5Q_ih4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw2PIRGxU9JTz907tp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}
]
```
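The "look up by comment ID" step can be sketched as a small parser over a batch response like the one above: parse the JSON array and index each coded row by its `id`. This is a minimal sketch, assuming the raw model output is exactly a JSON array of objects with an `id` field plus one key per coded dimension; the two rows and the `index_by_id` helper name are illustrative, not part of the actual tool.

```python
import json

# Hypothetical raw batch response from the coding model: a JSON array where
# each element carries the comment ID plus one value per coded dimension.
# The two rows below are copied from the response shown above.
raw_response = """
[
  {"id": "ytc_Ugz9LLTHgqsXKgU2J-l4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugz8LbT4_g6M0Uk1GGx4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a batch coding response and index the coded rows by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codes = index_by_id(raw_response)

# Look up one comment's coding result by its ID.
row = codes["ytc_Ugz9LLTHgqsXKgU2J-l4AaABAg"]
print(row["policy"], row["emotion"])  # prints: ban outrage
```

In a real pipeline the model output may need validation first (e.g. rejecting rows with missing dimensions or unknown IDs) before it is indexed and displayed as a coding-result table.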