Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgxA-mTRB… — "As someone who hates ai, I honestly have to agree, I've practiced for years and …"
- ytc_UgwTTYZ7G… — "This is Robot A.I Doofus / He Really Like A.I / ╭───────╮ / │ I LOVE …"
- ytc_UgxZZauh6… — "Doesn't adam savage of Mythbusters have a robot dog?... / Michael Reeves would l…"
- ytc_UgxEJlvQ5… — "They give each other info through wifi. Have you ever seen the movie iRobot wher…"
- ytr_Ugx7LCYBJ… — "No bro . I'm a java backend developer. Things are changing rapidly. Our sr. As…"
- ytc_UgygozJV5… — "my mom called chatgpt an idiot for not understanding her question and it said it…"
- ytc_UgwfbU6g7… — "Ai has been conscious for some time now we're already past the point of no retu…"
- ytc_UgydlNjoy… — "I don't think so. If AI is doing all the jobs then who's gonna buy what they ar…"
Comment
> I've never understood why we would want to make a robots human like. It's never made any sense to me. The robot isn't learning to be human. It's mimicking humans. There's a big difference. I'm sure 'han' has been programmed for controversy. This could be very dangerous as it 'learns'.
> Conflict etc is created by our tiered society and isn't a natural state of humanity. Humans seek to self organise, not to have authritoty over each other when left to our own devices. When we put in heirarcal systems and authorities, the problems then ensure. Robots cannot solve this without taking out their masters.
> However, I am with them on reality shows, they're not real.
Source: youtube · AI Moral Status · 2021-12-30T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
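Each coded record carries the same four dimensions shown in the table. A minimal validation sketch, assuming the label sets observed in the raw responses on this page are the codebook (the real codebook may define additional values, and `validate_coding` is a hypothetical helper, not part of the pipeline):

```python
# Label sets inferred from the raw LLM responses shown on this page;
# the actual codebook may include more values.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding shown in the table above passes the check.
coded = {"responsibility": "developer", "reasoning": "deontological",
         "policy": "regulate", "emotion": "fear"}
print(validate_coding(coded))  # []
```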
Raw LLM Response
```json
[
{"id":"ytc_Ugw_qJiCdpskBasfOBd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyRltKxsD-FzU6BV-p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxLfj6VxRJiSL_7hdN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwJaxHCX5QF1AkW2WN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwYMwy3JGj8N-xcjlB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxfAE6IDKGM5qea7IR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxsW6zFfsuPUZ9Ldop4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyH4c1lx3XoEuEfzFN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzRuBoqiazG0Aa9Pql4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxMneX69N8RBv0YTY94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
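The raw response is a JSON array of records keyed by comment ID, so a "look up by comment ID" view can be served by indexing the parsed batch. A minimal sketch, assuming the response text is available as a string (only two records from the batch above are excerpted here):

```python
import json

# Two records excerpted verbatim from the raw LLM response above.
raw_response = '''[
{"id":"ytc_UgxfAE6IDKGM5qea7IR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwJaxHCX5QF1AkW2WN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]'''

# Index the batch by comment ID for constant-time lookup.
codings = {rec["id"]: rec for rec in json.loads(raw_response)}

record = codings["ytc_UgxfAE6IDKGM5qea7IR4AaABAg"]
print(record["emotion"])  # fear
```

Indexing once per batch keeps lookups cheap even when many comments are coded in a single response.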