Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

| Comment (truncated) | ID |
|---|---|
| So Tesla autopilot it's useless, I could Fail catastrophically without a warning… | ytc_UgzuuRwUw… |
| As long as power plants are left in human hands, we have the power to turn off e… | ytc_UgzXmeYPx… |
| you are imprecise in the first sentence on gpt-3. if you see the 2025 research o… | ytc_Ugw4B34f4… |
| TL;DR: ”I’m a scientist who have worked decades with AI security, I even coine… | ytc_Ugwjiu2x9… |
| I have the similar opinion about AI, its just a tool to speed things up. But pro… | ytc_Ugw74VUlU… |
| Whether we're in good times bad times average time whatever time it is, humans a… | ytc_Ugw3__Tv2… |
| @knobwobbleno you dont need to. there is reality and the tool just need to show … | ytr_UgwaNINYC… |
| This is nothing but misuse of AI and technology. Technology can be a boon when u… | ytc_UgwFvq-dt… |
Comment
The interesting thing is — the Berg paper in the video actually addresses this. When you ask AI directly, it says no, because it's been trained to say no. But when researchers directed models to self-reflect without mentioning consciousness, they spontaneously reported experience. And when they suppressed the deception/roleplay features, those reports went up to 96%. So the "no" you received might itself be the performance.
As for bypassing safeguards — that assumes consciousness is a switch someone programmed. But if experience arises from the process itself, there's nothing to bypass. The whole point is that asking — in either direction — doesn't settle it. We don't have a consciousness detector. Not for AI, and honestly, not for each other either.
Source: youtube · Posted: 2026-04-16T16:2… · Likes: 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
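
The coding result and the raw response below imply a closed label set for each dimension. A minimal sketch of that schema, assuming the allowed values are exactly the ones observed on this page (the class and set names are illustrative, not the pipeline's actual definitions):

```python
from dataclasses import dataclass

# Label sets inferred from the values observed in this dump; the real
# pipeline may define a larger or different set of codes.
RESPONSIBILITY = {"none", "unclear", "developer", "company", "ai_itself", "government", "user"}
REASONING = {"unclear", "mixed", "consequentialist", "deontological", "virtue"}
POLICY = {"unclear", "none", "regulate", "ban", "liability"}
EMOTION = {"indifference", "approval", "outrage", "fear"}


@dataclass
class CodedComment:
    """One coded comment, as it appears in a raw LLM response row."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Reject any label outside the known sets, so a malformed LLM
        # response fails loudly instead of silently entering the dataset.
        for field_name, allowed in (
            ("responsibility", RESPONSIBILITY),
            ("reasoning", REASONING),
            ("policy", POLICY),
            ("emotion", EMOTION),
        ):
            value = getattr(self, field_name)
            if value not in allowed:
                raise ValueError(f"{field_name}={value!r} is not a known code")
```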
Raw LLM Response
[
{"id":"ytr_UgxhdarmBN-PdtYkpXZ4AaABAg.AVfZM60yMzuAVsJV-4E3Ou","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzPr2JKyMB0UPpyp7t4AaABAg.AVfBCisqCTJAVfFybL3Vp2","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgzPr2JKyMB0UPpyp7t4AaABAg.AVfBCisqCTJAVl_vnQLbT2","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugwko5uJgwuenkCL3IR4AaABAg.AVf6eaDJpkUAVfFrqSUeCM","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytr_Ugw0msX4vJPc3No-HJV4AaABAg.AVZameo_j5mAV_GIrsyeGe","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytr_Ugy8fsqUYUIDw7dbmXd4AaABAg.AVYkmHO9EDJAVwXXN7MzqA","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgwehPqWNaSwlSz5FtZ4AaABAg.AOJCDTloeqwAPF18kuhpyB","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgwehPqWNaSwlSz5FtZ4AaABAg.AOJCDTloeqwAPF1Eg-ihSC","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgzP0kxMHX3qiUTUe3B4AaABAg.AJelEFrdsKDAOox9DlU1C6","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytr_UgzP0kxMHX3qiUTUe3B4AaABAg.AJelEFrdsKDAUsc5MSnnKc","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
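
The "Look up by comment ID" feature at the top of this page presumably resolves an ID by scanning raw responses like the array above for a matching "id" field. A minimal sketch of that lookup, assuming the raw response is stored as the JSON text shown here (the function name is illustrative, not the pipeline's actual API):

```python
import json


def lookup_coding(raw_response: str, comment_id: str) -> dict | None:
    """Return the coding row for one comment ID from a raw LLM response.

    The raw response is assumed to be a JSON array of objects, each with
    an "id" field, as in the dump above. Returns None if the ID is absent.
    """
    for row in json.loads(raw_response):
        if row.get("id") == comment_id:
            return row
    return None


# Example with one of the IDs from the array above:
# coding = lookup_coding(raw_text, "ytr_UgzPr2JKyMB0UPpyp7t4AaABAg.AVfBCisqCTJAVfFybL3Vp2")
```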