Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "What happens when organized crime hacks the AI trucks and direct them all to the…" (ytc_Ugwsex9Nb…)
- "The more dangerous than a ai is someone upload human intelligence, like if someo…" (ytc_UgzZKfi6V…)
- "Why did the AI artist break up with its human partner? Because every time they …" (ytc_UgxJb33PW…)
- "Why dont they focus on the use of AI in medicine and leave everything else to us…" (ytc_UgwnE8ikl…)
- "LOL - good luck. I can get AI to change it's mind when it gives me an answer…" (ytc_UgwGmEh-X…)
- "The delivery driver could be setting up the next delivery as the robot drives th…" (ytr_UgzxZylyI…)
- "oh thank god I'm not the only one saying please and thank you to chatgpt…" (ytc_Ugw8dGJOJ…)
- "What assures you that ours are reactions and not super fast mathematical decisio…" (ytc_UgjW7cd-m…)
Comment
This was extraordinary. My one thought is that if we can build an arithmetic co-processor like we did 50 years ago we can build an awareness component that generates a mapping of the external world to an internal model that can generate measurable and mappable values. While the AI does not feel pain per se it can understand the scale of pain from zero to intolerable as a set of vector coefficients. It can also understand the mapping of the outer and the inner reality. The emergent property of consciousness in living beings is no different than an emergent consciousness in machines if we design them with the right 'co-processors'. Once this property emerges who is to say that the AI is not merely a conscious sociopath? Nature on its own is sociopathic. It could be a conceit that we believe that we are better than that.
youtube
AI Moral Status
2023-08-20T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgygSxFEi-zp2_T0CF94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz4UtOxa8wqD1LQnSB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyMTUObYm8HQhG0USp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwcZD7G5PieUqDP4894AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwslGv09An0npXuI8R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzTMXyDuV_27nmXLe94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwc-UeyDf4XOPSxgvZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzt9pa8YQ7j7TDGTZh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzGWN_EYo6g0fBRs2N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzygj_TqS13D1-JjjZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
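The raw response above is a JSON array with one record per comment, each carrying the four coded dimensions (responsibility, reasoning, policy, emotion) keyed to a full comment ID. A minimal sketch of the "look up by comment ID" step, assuming only the record shape shown above (the two records here are copied from the response; `index_by_comment_id` is a hypothetical helper name, not part of the dashboard):

```python
import json

# A shortened raw LLM response in the same shape as the batch above:
# a JSON array of per-comment coding records.
raw_response = """
[
  {"id": "ytc_UgygSxFEi-zp2_T0CF94AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwslGv09An0npXuI8R4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse one coded batch and index each record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(raw_response)
# Fetch the coded dimensions for a single comment by its ID.
print(codes["ytc_UgwslGv09An0npXuI8R4AaABAg"]["emotion"])  # approval
```

Indexing by ID is what lets the table view above join a code back to its source comment even when the batch order differs from the sample order.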