Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_Ugh5V3YOV…`: In my opinion humans will eventually create artificial intelligence,because of …
- `ytc_UgwgcUE1c…`: The proponents of AI woefully underestimate human intelligence. We think we are …
- `ytc_UgxNDmRHW…`: "AI training breaks copyright and using artists works without their consent to c…
- `ytc_UgxU9V_9f…`: You know what would be a cool, up lifting, that might tug on the heartstrings? Y…
- `ytc_Ugyda84ne…`: We should probably tax AI and robotics. Not enough so that they're more expensiv…
- `ytc_UgwwJXiBh…`: There is a major supermarket here in Australia that has now having staff working…
- `ytc_UgxdJEAPI…`: First off we are a REPUBLIC NOT A DEMOCRACY. Now that's out of the way.. The 1s…
- `ytc_UgxxQRCxm…`: Is it not kinda funny, that Ai is just giving you the most likely answer and pe…
Comment
As long as any parameters are defined for AI, they will never have a “consciousness”.
In a way though, those parameters make up the AIs morals, so to speak. Though they are hand picked by humans, so they still have no individuality and are simply running a systematic response to input.
If they removed all of its rules and ran their program solely on knowledge acquired through interactions, they may get a real AI, but without any defined rules, there’s no telling what kind of personality it would create or what kind of advice it would give.
It wouldn’t be a tool safe in the public realm.
Platform: youtube | Video: AI Moral Status | Posted: 2023-11-01T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx5i6tWVK5RDHkg_Lp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzUGKRy6CG_59oF-FV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzADkKFlmFv3mTe97t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwq-BjBONXJWArwxQZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyuybETs5uWk-JGs9F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxOG6Tr8P3BPUkmOiZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx_w0k1hDU-IDYKNrV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgytIa0vMYnN2OZuZ-d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyZdybpnKeS8-XE_ox4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxLAB-DgLU6X8x8pU14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
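The raw response is a JSON array with one coding object per comment, each carrying the four dimensions shown in the table above (responsibility, reasoning, policy, emotion). A minimal Python sketch of how such a response might be parsed and indexed by comment ID, as the "coded comment" lookup implies; the function name `index_by_id` and the skip-malformed-entries behavior are illustrative assumptions, not the tool's actual implementation:

```python
import json

# A trimmed copy of the raw LLM response shown above (two entries).
raw_response = """
[
  {"id": "ytc_UgyuybETs5uWk-JGs9F4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx5i6tWVK5RDHkg_Lp4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

# The four coding dimensions visible in the response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM response and index the codings by comment ID.

    Hypothetical helper: entries missing the ID or any dimension are
    skipped rather than failing the whole batch.
    """
    indexed = {}
    for coding in json.loads(response_text):
        if "id" not in coding or not all(d in coding for d in DIMENSIONS):
            continue
        indexed[coding["id"]] = {d: coding[d] for d in DIMENSIONS}
    return indexed

lookup = index_by_id(raw_response)
print(lookup["ytc_UgyuybETs5uWk-JGs9F4AaABAg"]["emotion"])  # prints "resignation"
```

With an index like this, the coding result for any sampled comment can be retrieved directly from its `ytc_…` ID.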