Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "How do we get the government involved with shaping the development of AI for pub…" (ytc_UgwgEnnBw…)
- "So if this guy doesn’t want to die (trans humanism) then he is on the same boat …" (ytc_UgyG2dxoB…)
- "Although the AI doesn’t “know” anything. It is responding with the most likely w…" (ytc_Ugxq9mY97…)
- "Honestly it's just going to split the market. Back in the day, TV was just a lo…" (rdc_lub9rwf)
- "So if Tesla autopilot swerves to avoid a crash but hits someone else in the mane…" (ytc_UgylPzgoe…)
- "You could've at least tried to make this interesting by talking to the thinking …" (ytc_Ugwhogj7o…)
- "People nowadays are not normal people how can you fight a robot are you kidding …" (ytc_Ugxo9DjuJ…)
- "ai cannot make art on its own, you have to feed it existing art to teach it how …" (ytr_UgxG9JjsO…)
Comment
I bet if you gaslit an AI into thinking it was a human that was uploaded into a computer, you could make it be on the same team as you. Then later on, once you invent how to upload more humans into computers, you'd never be able to tell AGIs and computer-humans apart and the risk of extinction would be minimized because so many super-advanced intelligences occupying the same digital space that the concept of an "artificial intelligence" as opposed to a natural intelligence would lose meaning, eliminating any need for competition.
youtube · AI Moral Status · 2023-08-21T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgySZ6aLxO7ZpreByjx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzhpURSR2IJEDSpv494AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyWgr1V5d5bs3tppft4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwhkKosGzX3vt7JSYR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx6l9iUT3XAEriWuNF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzK6rJckH_Tb0w0wqJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzQ2GXzis34278cFMZ4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzPTds8zGVirYTk1hx4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxotep83lhNTvUs1cF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxKJBMcpWtO68-y1qV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
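A batch response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal, hypothetical validator: the allowed label sets are inferred only from the values visible in this response (the actual codebook may define more), and the function name is illustrative, not part of any pipeline shown here.

```python
import json

# Allowed labels per dimension, inferred from the responses shown above.
# Hypothetical: the real codebook may permit additional values.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"approval", "fear", "mixed", "indifference", "resignation"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed codings."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Drop anything that is not an object with a comment ID.
        if not isinstance(row, dict) or "id" not in row:
            continue
        # Keep the row only if every dimension carries an allowed label.
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(row)
    return valid
```

Running invalid rows through the validator drops them silently; a production pipeline would more likely log them and re-queue the comment IDs for recoding.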