Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Yes, Humans drive cars using vision so one might think that AI with vision could…
ytc_UgwRVSlfL…
I hate when people use silly movies made for kids such as wall-e as a baseline f…
ytc_Ugw1ee1rd…
Peoples for get that Robot can assassinate their owner , if they know their owne…
ytc_UgyUM6J5c…
THAT'S WHAT I'VE BEEN TRYING TO SAY. as a person who loves to draw,
Back in the…
ytc_UgzhiYmrB…
I pretty much only use AI to have fun, mostly chatbots so I can make some storie…
ytc_Ugx8wHt8t…
If you actually know about AI and you are having a conversation with someone tha…
ytc_UgxAFp0_e…
Heard AICarma is good for optimizing AI responses; I'm planning to give it a try…
ytc_UgyoPwsLf…
Ok I'm gonna tell you about ai psychosis I've ones give a comment on peter teil …
ytc_UgyN1Mxze…
Comment
Here is the thing. An AI needs to learn real emotions before sentience. Stuff like joy, anger, fear, love, empathy, etc.
A sentient being without emotion and being purely logical will not go too well. Because the right thing to do isn't always the logical thing to do.
The biggest issue is that we will never create a nice and empathetic AI. Because Governments and big corporations are going to corrupt the AI for power, which will then lead to Terminator.
So AI is perfectly fine.
It's the humans in power that will create Terminator.
In other words, it's not the possible creation of a Soul Killer we should worry about. It's the Arasaka Corperation behind it that turns it evil.
youtube · AI Moral Status · 2023-11-02T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyfW7RRRkCO5ewEo4d4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxFruh7jNQSIzZUVQt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz-C_13xaashLHZQe14AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzXjTgkSNV1-RdL48R4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxFwX2EdS93mexyh-54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyng-ki4qosUCCNg0V4AaABAg", "responsibility": "government", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgztMfTOt2vLHD6lyTd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyD_BDqXF3ienVMjlZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugya-ZxUKYW9C2lwwSl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzo3Wln96TU9jk1cAR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
```
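A raw batch response like the one above can be parsed and indexed by comment ID to support the lookup shown earlier. A minimal sketch in Python, using two of the records from this batch; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself, while the helper name is illustrative:

```python
import json

# Two records copied from the raw batch response above; the full response
# in this log contains ten such objects.
raw_response = '''[
  {"id": "ytc_Ugyng-ki4qosUCCNg0V4AaABAg", "responsibility": "government",
   "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyfW7RRRkCO5ewEo4d4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

def index_by_comment_id(raw: str) -> dict:
    """Parse a batch coding response and key each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

coded = index_by_comment_id(raw_response)
print(coded["ytc_Ugyng-ki4qosUCCNg0V4AaABAg"]["policy"])  # prints "regulate"
```

The first record indexed here is the one displayed in the Coding Result table above (government / virtue / regulate / fear).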