Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
| Comment ID | Preview |
|---|---|
| ytc_UgyCs_KMq… | Doing a deeper dive into how data brokers have used personal data in the 21st ce… |
| ytr_UgxmscI5W… | That is Jeremie Harris, the creator of Gladstone Ai. That was from a Joe Rogan p… |
| ytr_UgwC7D4gr… | For about 90-95% of people in the country, education isn't a good idea. After al… |
| ytc_UgzB17uap… | The communist channel More Perfect Union is just trying to protect the corrupt t… |
| ytc_Ugw-rn3CY… | Rest assured, the leaders of this world have not waited for perfect AI to … |
| ytc_Ugw7mAE-s… | i do not believe that next-token predictors would be able to represent their con… |
| ytc_UgwZHQmCh… | 1. The first thing wrong with this video was the robot having a gun. 2… |
| rdc_e2vvacc | > exclusive two-country deals. You gotta be kidding me. Do such thing/propos… |
Comment
Aren't most of these "tests" really just reflections of what we think a computer/robot/program can't do? We believe computers aren't lazy (can't be) so that should be a test. Anything can be simulated, that's the point. So I believe it comes back to whether we are convinced. Once that happens then it has proven it has a moral impact on us and therefore merits some level of moral consideration.I could say that a test would include the robot's ability to reject parts of it's programming. Because we believe that a computer must do what it's told, that means it can't arbitrarily reject it's code. People can. However, can't this behavior also be faked? Now we have to add, ..reject parts of it's programming, but is not faking it. Now we've tossed the entire premise out, robots are possibly human because they are able to fake intelligence to the point that it no longer seems simulated. There seems to be no "test" that doesn't kill the whole idea in the first place. Hence, therefore (five six) it's all about how it affects you. If you believe, then it is. Ego Credo, Est
Source: youtube, posted 2016-09-03T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | contractualist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgijOXwzX5ll4HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugi_L9Ps1Ao3wngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgjE_qt3DXc4AXgCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugg7AdD3sDYLcHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgicXkrK5at_b3gCoAEC","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"approval"},
{"id":"ytc_UgiQFXdWgMS6SXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UghbylCg24GyCXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjxB2hYHk0ringCoAEC","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UggLyqhud7inwngCoAEC","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgjsMxoDrjmOY3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
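The raw response above is a JSON array of per-comment code objects. A minimal sketch of how such a batch might be parsed and sanity-checked before use — note that the value vocabularies below include only the values that actually appear on this page, not the full codebook, so they are an assumption:

```python
import json

# Coding dimensions and the values observed in the raw responses above.
# The complete codebook vocabulary is an assumption; only values that
# actually appear on this page are listed.
OBSERVED_VALUES = {
    "responsibility": {"none", "distributed", "company"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"indifference", "approval", "outrage", "resignation",
                "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    dropping any record that is missing a field or carries a value
    outside the observed vocabulary."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue
        codes = {dim: rec.get(dim) for dim in OBSERVED_VALUES}
        if all(codes[dim] in OBSERVED_VALUES[dim] for dim in OBSERVED_VALUES):
            coded[cid] = codes
    return coded

# Example: the first record from the raw response above.
raw = ('[{"id":"ytc_UgijOXwzX5ll4HgCoAEC","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
print(parse_raw_response(raw))
```

Filtering rather than raising keeps one malformed record from discarding the whole batch; dropped IDs can then be re-queued for coding.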