Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Valid points, but I think some facts have been twisted to fit this particular na… (ytc_UgwQZ5Pbv…)
- Linus Torvalds, the creator of the Linux kernel, has famously described the curr… (ytc_UgwWH4iet…)
- Oops. ChatGPT contradicted itself and we fell out. I called it a few choice word… (ytc_UgwCN4TJe…)
- Clickbait. All of these "problems" have been talked about. A lot. We are car cen… (ytc_Ugyr8Mc12…)
- Everybody likes a Show Script like Arnold (Terminator. ) So far!. Sofia has bee… (ytc_UgyR6z53f…)
- The fixes in ps where nessesary: When using AI the broad image is quick to gener… (ytc_UgyPLZuGX…)
- Elon should create a robot to do deep tissue back massages that release toxins a… (ytc_Ugxl9ay-o…)
- I feel like in a better world AI could have been used as a tool, to aid in model… (ytc_UgyIbsTLc…)
Comment
As for alignment: won't happen. The best bet is actually keep AGI in a cage. You can't align another human being, so what makes you think an AGI can be aligned. The only thing I can think of is making sure the AGI understands that coexisting is the best bet. The best alignment is probably a main goal that can be reached with humans and an AI - space exploration and research of reality.
youtube · AI Moral Status · 2023-08-21T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgybgS0cKgXaGJXPBix4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzkZQUSo5ZZlgIABfp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyJj3ZZ8yHR_xhhKVF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzp7V_h-R4cs5yDRb94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwuZZGRHAYDivEiL814AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwmvE53uvgZbfpMWDl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxNcb1WAT_bl6a6eX54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw9GszBCYSq6CQWs3R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgxtcutWbgibhDBSftp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyd9BCUqhRZGde9bet4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]
```
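A raw response like the one above can be checked before it is accepted into the coding table. The sketch below is a minimal validator, assuming the four dimensions shown (`responsibility`, `reasoning`, `policy`, `emotion`) and taking the value vocabularies visible in this sample as the codebook; the real codebook may well contain additional values, so these sets are an assumption, not the pipeline's actual schema.

```python
import json

# Vocabularies inferred from the codes visible in this sample response.
# ASSUMPTION: the real codebook may allow more values per dimension.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"liability", "none", "unclear"},
    "emotion": {"fear", "resignation", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose codes
    fall inside the allowed vocabularies."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in this dataset all carry a "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        if all(rec.get(dim) in vocab for dim, vocab in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Records that fail validation can be routed back for re-coding rather than silently written into the results table.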