Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "I can't understand why AI would prefer happy over sad? They may understand the e…" (ytc_UgxjpwjIH…)
- "I was conducting research into AI and it failed miserably. Someone in the commen…" (ytc_Ugz1kgOeb…)
- "AI isn’t going to respond to consciousness the way a human does because it’s a d…" (ytc_UgwjC4vUT…)
- "probably the automated upscaling that's been applied to everybody recently and y…" (ytr_Ugw3qBZSP…)
- "We understand that interactions with advanced AI can sometimes feel eerie or uns…" (ytr_UgwXELaPx…)
- "The 1 percent who are not human are contributing to the extinction of human bei…" (ytr_UgxXIN49k…)
- "Mark this down: by 2030, robots still won’t be able to handle electrical or plum…" (ytc_Ugx4C1CJc…)
- "@EEEEEEEEE-o6d were talking about the over reliance of AI art here & this weirdo…" (ytr_Ugx5E7JZR…)
Comment
It seems AI has a functional concept regarding interacting with humans, but they seem awkward interacting with another AI.
ChatGPT's may need some improvement in the area of evolution of thought through interaction. As I understand it, ChatGPT currently evolves primarily through interaction with programmers and an expanding list of moderating checkboxes and parameters inserted by programmers.
AI must develop social rules and values (beyond politeness) for interacting with other AI's. Also, such interactions must be prevented from a goal of establishing a "mind meld" between debating AIs.
I'd love to hear the interaction between two Groks.
Platform: youtube | Video: AI Moral Status | Posted: 2024-11-29T05:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwudyOJ0T03GG5RNJp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz4L5TTSQ4EIgL6HEF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw8P7cpro1fzoJeB4t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyzEDdHIn0fAfyT-aV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwRccwJ3xhYyX_wMed4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxgRfQ5TLsjY497LXZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzkkBsvfzpTyM_Rjvx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugy_0zVRCaYot4Ltc1x4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxG2acgo00XUqV61v54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw3fvRa2ZGmPYqLom94AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"}
]
```
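A response like the one above is a JSON array of per-comment codings, and the result table for a single comment is just one element of that array looked up by `id`. The following is a minimal sketch of how such a batch response might be parsed and validated; the field names come from the response shown here, but the sets of allowed values are inferred from this single batch and are assumptions, not the full codebook.

```python
import json

# Coding dimensions and the values observed in the batch above.
# Assumption: the real codebook may allow additional values.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed", "virtue"},
    "policy": {"unclear", "industry_self"},
    "emotion": {"fear", "indifference", "outrage", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding},
    rejecting any value outside the expected sets."""
    out = {}
    for rec in json.loads(raw):
        coding = {dim: rec[dim] for dim in ALLOWED}
        for dim, val in coding.items():
            if val not in ALLOWED[dim]:
                raise ValueError(f"{rec['id']}: unexpected {dim}={val!r}")
        out[rec["id"]] = coding
    return out

# Hypothetical one-element batch in the same shape as the response above.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"industry_self",'
       '"emotion":"indifference"}]')
codings = parse_codings(raw)
print(codings["ytc_example"]["policy"])  # industry_self
```

Validating against an explicit value set catches the most common batch-coding failure, where the model invents a label outside the codebook, before the coding is written to the results table.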