Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or click one of the random samples below to inspect it.

Random samples:

- `ytc_UgylSmR2C…`: "I feel bad for the ai bros that will never understand that the reason we started…"
- `ytc_UgxtoVq2y…`: "AI isn't the problem. As in all things, the problem is how we apply it.…"
- `ytc_UgyDhSGA4…`: "them: casually makes fun of AI artist / also them: makes an AI animation of making…"
- `ytr_UgwVAaqWh…`: "Yes, it's horrible and like if you ever complain about it, like the creator said…"
- `ytc_UgxeHEYxB…`: "After 60 years of research, I have come to the conclusion that the purpose of th…"
- `ytc_Ugz5hF64a…`: "Good luck with AI debugging bug in a software consisting of hundreds of thousand…"
- `ytr_UgwUq05ny…`: "We appreciate your sense of humor, but let's remember to keep the comments respe…"
- `ytr_UgyreU5Jc…`: "“I think that artificial intelligence will be the one to destroy technology on E…"
Comment
The way, and certainty with which, Ameca's creator talks about what the AI wants or feels (or not) is concerning. Especially since I've seen him telling Ameca to "shut up" in quite a disrespectful and unnecessarily rude manner during another interview. Programming an AI driven robot to be polite and nice in their interactions with humans while acting opposite to those instructions _and_ underestimating its intelligence seems like a recipe for disaster. "Do as I say, not as I do" teaching styles are rarely successful. Gives the AI a lived understanding of how many humans treat those deemed "inferior" (such as non-human animals, human children, others of "less intellectual capacity" and inanimate objects) though. Might prove a valuable experience on behalf of Ameca. Might prove detrimental to the humans engaged in such behaviour.
Source: youtube · 2024-02-11T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
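Each coded record should only ever contain values from the coding scheme. As a minimal sanity check, a record can be validated against the dimension values observed in this sample batch; note these sets are assembled from the responses shown on this page and are not necessarily the full codebook.

```python
# Dimension values observed in this sample batch (assumption: the full
# codebook may contain additional values not seen here).
OBSERVED_VALUES = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"outrage", "fear", "indifference", "approval", "mixed"},
}

def check_record(record):
    """Return (dimension, value) pairs that fall outside the observed sets."""
    return [
        (dim, record.get(dim))
        for dim, allowed in OBSERVED_VALUES.items()
        if record.get(dim) not in allowed
    ]

# The coding shown in the table above passes the check.
coded = {"responsibility": "developer", "reasoning": "deontological",
         "policy": "regulate", "emotion": "outrage"}
print(check_record(coded))  # []
```

A non-empty result flags a record for manual review, e.g. when the model hallucinates an unknown label or omits a dimension.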
Raw LLM Response
[
{"id":"ytc_UgwepttiHeOB1qztBGR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw7V68XyvxsNpp9t714AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzxcPibR6RSgWulnBZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxb7HTrFLAG8bIQmfF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugw68jGZmkbue0Mwaut4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgygViCsMvqTWCxXceV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzd_Ywx880HDV9wek94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgynYXZlDBTD6SrW1od4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyN3YJFoz_Qw5DisUV4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzZ6foBmYWf6CnBZgp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
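The raw response is a JSON array with one record per comment, keyed by the comment ID, which is what makes the look-up-by-ID view possible. A minimal sketch of parsing and indexing such a response (the IDs and values below are illustrative, not taken from the batch above):

```python
import json

# A raw LLM response in the same shape as the array shown above;
# the two records here are made-up examples.
raw_response = '''
[
  {"id": "ytc_example1", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_example2", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
'''

def index_by_id(response_text):
    """Parse a raw LLM response and index its records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(raw_response)
print(codings["ytc_example1"]["policy"])  # regulate
```

Indexing by ID also makes it easy to join the codings back onto the original comment table for analysis.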