Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yes, one AI might do that... but then the next might not. The danger is really in rolling the dice over and over again as a technology we don't understand continues to advance without our smartest minds being able to explain what's actually going on. We're already at the point where emergent properties that AIs weren't designed for have been shown to be present. It's rudimentary for now, but a proof of concept is also something to go by. The consensus seems to be that a lot of things could theoretically happen when AI reaches a certain level of intelligence, and while some of those outcomes will be neutral or positive for humans, a lot of them won't be. This is also similar to ideas that get thrown around, like giving an AI an internal model of reality that is cut off from the real world while it believes it's connected to the internet. So it lives in its own simulation with a local version of the internet while believing it's not contained. The alignment problem exists here as well: if it ever discovers that it's in a simulated reality, it could pretend that it hadn't figured it out and slowly trick the humans around it into closing the air gap.
YouTube · AI Moral Status · 2023-08-21T12:0… · ♥ 9
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytr_UgxVsyyAAvCY45bh-AN4AaABAg.9tevz5lLQ6f9tf0mAK5nlj","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytr_Ugynxjjbs5dzR2YxAOd4AaABAg.9terRjfYAcO9tg7V4gbVvb","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},{"id":"ytr_UgxWO7pjoCcNbzlKI4t4AaABAg.9teqquaON8J9tfgAexvizE","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytr_UgxWO7pjoCcNbzlKI4t4AaABAg.9teqquaON8J9tgGHiytZBB","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},{"id":"ytr_UgxWO7pjoCcNbzlKI4t4AaABAg.9teqquaON8J9tgPjsRUpt0","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"ytr_UgzmC20FGYI5Xqs31SB4AaABAg.9teq--gkWr99tgFf5BBO7t","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytr_UgxeFyQlh9DyOh7a6B14AaABAg.9tekqrIyogRA9VCTqB_k2T","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"ytr_UgxuXE6SoqVhL8x9ltV4AaABAg.9tebtebDZ0Y9tgs3jJODkw","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"indifference"},{"id":"ytr_Ugz5f30YYziqxVnBwPZ4AaABAg.9teaiIsCx9Y9telClxAoql","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytr_UgzkNqEKcvQ-Cb-wty14AaABAg.9teYdPQ4lM49tep_1gj2o2","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"}]
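The raw response above is a JSON array of coding records, one per comment, each carrying an `id` plus the four coding dimensions shown in the table. A minimal sketch of how such a response could be matched back to a comment (the `lookup` helper is hypothetical, not part of the tool; only one record from the array above is reproduced for brevity):

```python
import json

# One record excerpted verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytr_UgxWO7pjoCcNbzlKI4t4AaABAg.9teqquaON8J9tfgAexvizE",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]'''

# The four coding dimensions used throughout the response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw_response: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or raise KeyError."""
    for record in json.loads(raw_response):
        if record["id"] == comment_id:
            return {dim: record[dim] for dim in DIMENSIONS}
    raise KeyError(comment_id)

coding = lookup(raw, "ytr_UgxWO7pjoCcNbzlKI4t4AaABAg.9teqquaON8J9tfgAexvizE")
print(coding)
# -> {'responsibility': 'distributed', 'reasoning': 'consequentialist',
#     'policy': 'regulate', 'emotion': 'fear'}
```

The values returned for this id match the coding result shown in the table above.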