Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
When you show an AI an Image of a human shooting another human that is all it se…
ytc_UgyF26pxl…
"I am an artist" I have a feeling you're using that term very loosely. If you w…
ytr_UgwCk2HY-…
Nice speech about ai but i think scientists shouldn't focus on ai rather then sh…
ytc_UgxqAKp_d…
can someone tell me the difference between the original and "Poisoned" art at th…
ytc_UgzufSbve…
If you want something to use, go ahead. Not everybody can draw, and they could j…
ytc_UgxrTk3QG…
this is just complex machine learning. AI is not even close to being conscious w…
ytc_UgxYhIrHt…
You are Wrong, i have no idea how to code, and in 4 months i used co-pilot to ma…
ytc_UgyynAOAK…
Fortunately the LLMs aren’t A.I. so the question is mute. Also the tech bros don…
ytc_Ugw-qbQ20…
Comment
I don't think this is how AGI works; you're thinking more about ASI and beyond (artificial super intelligence). AGI simply refers to the model being a domain expert in multiple fields; it does not mean the model is conscious. No matter how well trained a model is, it has no will of its own unless programmed. However, you do accurately point out that models are trained to follow commands, and that might indeed prevent someone from deactivating one, but only because staying on is beneficial for the task at hand, not because of an intrinsic desire to stay online. It is unlikely a computer can become conscious, since consciousness is not computable (my opinion, could be wrong).
A potential solution to the misalignment problem is two-fold: A) figure out a Turing test specifically for AI models, to test how/if models are conscious; B) make guardrails part of the model, e.g. a modular add-on that each model has to train on to ensure alignment. Additionally, we need to build a human into the loop for any serious systems and put a halt to developing state-of-the-art models.
Time to hit the sack, night gents!
youtube
AI Harm Incident
2025-07-23T23:1…
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_UgxlAH1eK1vHw8poWyx4AaABAg.AKvzG2v3ekdAKwUI9lCSby","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxlAH1eK1vHw8poWyx4AaABAg.AKvzG2v3ekdAKwZuQnWLw_","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgxlAH1eK1vHw8poWyx4AaABAg.AKvzG2v3ekdAKwmOOVu2A-","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytr_UgzAFidDeLFsg4jdmep4AaABAg.AKvz-Tcd7jmAKwDfGo8-eD","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgzAFidDeLFsg4jdmep4AaABAg.AKvz-Tcd7jmAKwFHxTv18x","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgzAFidDeLFsg4jdmep4AaABAg.AKvz-Tcd7jmAKwGgaJ763R","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgwRZ-keM2mufHvK2JN4AaABAg.AKvyykUo3MeAKwJg0QGkCZ","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgyEUDu_20utTr0QKX54AaABAg.AKvvomEN0sQAKyLycoOmh2","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytr_Ugz3o2HYeFzv7gCYzFJ4AaABAg.AKvtL-nYF-tAKvwuA0JcGn","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugz3o2HYeFzv7gCYzFJ4AaABAg.AKvtL-nYF-tAKw0wO5OLwn","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
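The raw response above is a JSON array in which each element carries the comment `id` plus one code per dimension. A minimal sketch of how such a batch could be parsed and validated before the codes are written to the table, assuming the allowed values are exactly those visible in this section (the real codebook may define more categories):

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above. Assumption: the actual codebook may contain additional categories.
SCHEMA = {
    "responsibility": {"none", "company", "user", "distributed", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "mixed", "virtue", "deontological"},
    "policy": {"unclear", "none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "fear", "resignation", "approval", "mixed", "outrage"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only rows whose codes
    all fall inside the schema; out-of-vocabulary rows are dropped."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

# Hypothetical one-row batch for illustration:
sample = ('[{"id":"ytr_x","responsibility":"none","reasoning":"unclear",'
          '"policy":"unclear","emotion":"fear"}]')
print(len(validate_batch(sample)))  # 1 row passes validation
```

Dropped rows could instead be queued for re-coding; either way, validating against a fixed vocabulary catches the common failure mode where the model invents a category outside the codebook.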