Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't think this is how AGI works; you're thinking more about ASI (artificial superintelligence) and beyond. AGI simply refers to the model being a domain expert in multiple fields; it does not mean the model is conscious. No matter how well trained a model is, it has no will of its own unless programmed. However, you do accurately point out that models are trained to follow commands, and a model might indeed prevent someone from deactivating it, but only because that is beneficial for the task at hand, not because of an intrinsic desire to stay online. It is unlikely a computer can become conscious, since consciousness is not computable (my opinion, could be wrong). A potential solution to the misalignment problem is 2-fold: A) figure out a Turing test specifically for AI models, to test how/if models are conscious; B) make guardrails part of the model, as a modular add-on that each model has to train on to ensure alignment. Additionally, we need to build a human in the loop into any serious systems and put a halt to developing state-of-the-art models. Time to hit the sack, night gents!
youtube AI Harm Incident 2025-07-23T23:1… ♥ 3
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgxlAH1eK1vHw8poWyx4AaABAg.AKvzG2v3ekdAKwUI9lCSby","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgxlAH1eK1vHw8poWyx4AaABAg.AKvzG2v3ekdAKwZuQnWLw_","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgxlAH1eK1vHw8poWyx4AaABAg.AKvzG2v3ekdAKwmOOVu2A-","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytr_UgzAFidDeLFsg4jdmep4AaABAg.AKvz-Tcd7jmAKwDfGo8-eD","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgzAFidDeLFsg4jdmep4AaABAg.AKvz-Tcd7jmAKwFHxTv18x","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgzAFidDeLFsg4jdmep4AaABAg.AKvz-Tcd7jmAKwGgaJ763R","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgwRZ-keM2mufHvK2JN4AaABAg.AKvyykUo3MeAKwJg0QGkCZ","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgyEUDu_20utTr0QKX54AaABAg.AKvvomEN0sQAKyLycoOmh2","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytr_Ugz3o2HYeFzv7gCYzFJ4AaABAg.AKvtL-nYF-tAKvwuA0JcGn","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugz3o2HYeFzv7gCYzFJ4AaABAg.AKvtL-nYF-tAKw0wO5OLwn","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
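Each record in the raw response carries an `id` plus four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and sanity-checked before use is shown below; the allowed label sets are inferred only from the values visible in this response (the full codebook may define more), and the sample `id` is hypothetical.

```python
import json

# Label sets inferred from the values seen in this raw response only;
# the actual codebook may include additional labels (assumption).
ALLOWED = {
    "responsibility": {"none", "company", "user", "distributed",
                       "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "mixed", "virtue",
                  "deontological"},
    "policy": {"unclear", "none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "fear", "resignation", "approval",
                "mixed", "outrage"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} label {rec.get(dim)!r}"
                )
    return records

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytr_example","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"indifference"}]')
records = validate_records(raw)
print(len(records))  # → 1
```

Validating against a closed label set catches the common failure mode where the model invents an off-codebook label, so bad records fail loudly instead of silently skewing the coded counts.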