Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or browse the random samples below.
Random samples

| Comment (truncated) | ID |
|---|---|
| The test isn't to successfully click the box, it's how the cursor moves to the b… | ytc_UgwzxLCQM… |
| When all Amazon's employees become robots then the robots can become Amazon's ne… | ytc_Ugyn6N1kL… |
| "I want AI to do my laundry and dishes so that I can do art and writing, not for… | ytc_Ugyo-r4nO… |
| AI cannot be programmed in unethical way why don't you try it ? It will back fir… | ytc_UgxtoZiCf… |
| I get where you're coming from! The conversation touches on some complex themes … | ytr_UgxmXZ-TP… |
| Thank you for illustrating the last point, comparing digital art to traditional … | ytc_Ugy3I0eOn… |
| You make a lot of good points and I do agree that with time and more optimizatio… | ytr_UgzkL47hv… |
| AI is decent at general things otften way off on an unusual data. So yeah, it is… | ytc_Ugx8R4uNc… |
Comment

> I think you are stretching this a little too far. I'ts nice to argue hypothetical but there is a tremendous amount of unknown unknowns here.
>
> For starters there is a wide gap between what we can conceive of doing some day and what we can probably do in the next decade or so. Meaning, we don't have to worry about "Skynet" because we quite literally can't make one right now. Moreover, if we put all the smartest computer scientists in a building and locked them in they couldn't create one because there is just too much we do not know how that level of AI would work.
>
> It's entirely possible that future programmers and design an AI to have just enough independence to be useful while not be sentient. We do not know.
Source: youtube · AI Moral Status · 2017-02-23T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
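The table above is a rendered view of a single record from the raw batch response shown below; its values match the `ytc_UgijBDV5-iAE7HgCoAEC` record there. A minimal sketch of that lookup in Python, assuming each batch is stored on disk as a JSON array like the one below (the file path and function name are hypothetical, not this tool's actual API):

```python
import json

def lookup_coding(batch_path: str, comment_id: str) -> dict | None:
    """Return the coding record for one comment from a stored batch response."""
    with open(batch_path) as f:
        records = json.load(f)  # a JSON array of per-comment records
    # Each record carries the comment ID plus the four coded dimensions.
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None

# Hypothetical usage; the path is an assumption:
result = lookup_coding("responses/batch.json", "ytc_UgijBDV5-iAE7HgCoAEC")
# -> {"id": "ytc_UgijBDV5-iAE7HgCoAEC", "responsibility": "none",
#     "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
```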
Raw LLM Response
```json
[
{"id":"ytc_UgibKKnw0qnP8HgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UggiLxFpt8eSvHgCoAEC","responsibility":"unclear","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgiH_BILS3yl_HgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ughk9klhegKuJXgCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugh9YkkFUkp7lXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgheAkP5X8Gq5ngCoAEC","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UginAgDYmWof_3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgijBDV5-iAE7HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgiUsSTwzN6Bl3gCoAEC","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugg_9SJSZWuIo3gCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
```
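Since the model returns one flat JSON array per batch, a light schema check can catch malformed records before they are rendered here. A minimal sketch, again in Python; note that the allowed value sets below are only the categories observed in this one response, so the project's full codebook may define more:

```python
import json

# Categories observed in the sample response above; the real codebook
# may allow additional values for each dimension.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue"},
    "policy": {"none", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"outrage", "mixed", "approval", "fear", "indifference", "resignation"},
}

def validate_batch(records: list[dict]) -> list[str]:
    """Return human-readable problems; an empty list means the batch is clean."""
    problems = []
    for i, rec in enumerate(records):
        if "id" not in rec:
            problems.append(f"record {i}: missing id")
            continue
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"{rec['id']}: unexpected {dim}={value!r}")
    return problems

# Hypothetical usage on a raw response string:
# problems = validate_batch(json.loads(raw_response))
```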