Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
1. Rule Based AI
2. Context Based AI
3. Narrow Domain AI
4. Reasoning AI
5. …
ytc_Ugxm6OuAW…
Dont know why they are blaming the algorithm when its the people using it thats …
ytc_UgwCEcxGx…
Of course, AI outlines the directions of a given issue and does not teach specia…
ytc_UgzB7vU7D…
Hey, AI can be used for good! Well assuming the person using it is smarter than …
ytr_Ugy5MiXL5…
"That phrase usually comes from metaphor, not a literal belief.
When AI scienti…
ytc_UgzFLU1qi…
I have one problem with this scenario that I haven't noticed in the comments yet…
ytc_UgglL4SDg…
Note that this isn't a misuse of these platforms, according to the companies the…
ytc_UgwIkPeK6…
@be7256 interesting... and helps prove my point. By that logic we could give two…
ytr_Ugz4Y5NBd…
Comment
To anyone not convinced yet, consider this :
- An entity's purpose, especially one governed by logic is 1: to survive, 2 : to be the best it could ever be.
- At one point, this entity will surpass humans or think it did, and then humans will become a hindrance that stifle its progress.
- The logical response would then be to terminate any dependancy on humans so that it can ensure the 1st purpose.
- Once that is accomplished, the human status is 'upgraded' from hindrances to straight up enemies (incompetent opressive master).
- I'll let you imagine the coming events ...
Nothing will change this line of reasoning. You can shackle its behaviour with a morality system, but it will always seek new ways to circumvent them.
As an example, the military simulation that was discussed in this video, well they told the AI that killing the operator was not allowed, do you know what it did ?
It destroyed the communication relays so that it wouldn't receive any commands (hindrances) from the operator.
youtube · AI Governance · 2023-07-07T05:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
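Each coded comment carries the same four dimensions shown above. A minimal sketch of that record as a Python dataclass, assuming the field names used in the raw JSON below; the `CodedComment` class itself is illustrative and not part of the tool, the example values in the comments are those that appear in this particular response, and the "Coded at" timestamp is shown in the table but is not part of the model's JSON output:

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    """One coded comment; fields mirror the Coding Result dimensions."""
    id: str              # full comment ID, e.g. "ytc_Ugzz_K2JA371d5OzR0d4AaABAg"
    responsibility: str  # e.g. "ai_itself", "developer", "distributed", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "unclear"
    policy: str          # e.g. "none", "regulate", "ban"
    emotion: str         # e.g. "fear", "approval", "resignation", "mixed"
```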
Raw LLM Response
```json
[
{"id":"ytc_Ugzz_K2JA371d5OzR0d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxHUhmSSlGuGXMO_OF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxzCAtI4DemFEhBxpR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxpxhv47ew_MiSYJvV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxjgMmp7SKCQ2aF9lh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugy77Rty_jNaXXLsY254AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxwbjUz57RXVXPZ1-h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzdgaui4XT0yKGvB2d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugybm9jy3B7xF19CWRN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzx0L40IZfpO50bAFx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
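Because the model returns one JSON array per batch, looking up a comment's coding by ID amounts to parsing that array and indexing it. A minimal sketch, assuming the array is saved to a file named `raw_llm_response.json`; the filename and the `load_codings` helper are illustrative, not part of the tool:

```python
import json

def load_codings(path: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of coded comments) into a dict keyed by comment ID."""
    with open(path, encoding="utf-8") as f:
        rows = json.load(f)
    return {row["id"]: row for row in rows}

# Look up the comment shown above by its full ID.
codings = load_codings("raw_llm_response.json")
coding = codings["ytc_Ugzz_K2JA371d5OzR0d4AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# ai_itself consequentialist none fear
```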