Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "in the midst of all chaos, the female a.i, just wants to be dating gir;friend…" (`ytc_UgxbuqBtG…`)
- "China ; we use AI to develop human. U.S ; we farm weeds to degrade human…" (`ytc_Ugx7D_eNV…`)
- "All AI stuff are pretty much consumerism's fault to me, people doesn't care the …" (`ytc_Ugz2i1woR…`)
- "I feel like it would be easy to get rid of ai, literally get rid of all the data…" (`ytc_Ugz4JmFPw…`)
- "It gives me so much hope that someone actually has a sensible plan for how to st…" (`ytc_UgwgTCe7D…`)
- "Society will ultimately need to move towards universal basic income. This is a s…" (`ytr_UgywVfDkh…`)
- "The harder they push to make a generally intelligent AI, the more they're realiz…" (`ytc_UgwYU39ZL…`)
- "Human society is and has been very hierarchical anyway, does it really matter if…" (`ytc_UgyKXKzde…`)
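The "look up by comment ID" flow above can be sketched in a few lines. This is a minimal, hypothetical illustration, assuming the coded output is held as a list of dicts shaped like the raw LLM response at the bottom of this page; the names `CODED` and `lookup` are made up for the example, not part of the actual tool.

```python
from typing import Optional

# Hypothetical in-memory store of coded comments, shaped like the
# raw LLM response shown in the "Raw LLM Response" section.
CODED = [
    {"id": "ytc_Ugz70g__Q_BvYMlme5R4AaABAg", "responsibility": "none",
     "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
    {"id": "ytc_Ugz0og1yEIg0h_v84A54AaABAg", "responsibility": "ai_itself",
     "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
]

def lookup(comment_id: str) -> Optional[dict]:
    """Return the coded record for a comment ID, or None if it was not coded."""
    return next((row for row in CODED if row["id"] == comment_id), None)
```

A dict keyed by ID would be the natural choice at scale; the linear scan here just keeps the sketch short.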
Comment
36:42 It's smoke and mirrors. CoT is just the AI pretending to think. It will add an "Aha!" to it's reasoning. BUT that's all it is -- an "Aha!" added to a string with no meaningful change to output. If hints are hidden in the input and then the AI is asked to select a correct answer from a list of multiple choices, most of the reasoning will point it to the correct answer, but then it will come to the conclusion that the choice associated with the hint is the correct answer. So CoT is just a mask for a process that has already shaped the nature of the output. Reasoning and thinking in this sense is an illusion.
youtube · AI Moral Status · 2026-03-01T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz70g__Q_BvYMlme5R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz0og1yEIg0h_v84A54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugyk1SQ3lR_v-dPIq2B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxtEmk3dKhQyGRllw54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy67qLKgDZiUncanQB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy5er7f8urjQ1q7_TV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxHqn0NBzgHyka-d5h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxs326Nr4ajzGG58v14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx-LkH8UvdSYA5aI9x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz0w58rebTllmC1DzZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"}
]
```
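Since the raw model output is a JSON array of per-comment codes, a small validation pass can catch rows where the model drifted outside the codebook. The sketch below is an assumption-laden example: the allowed value sets are inferred only from the codes visible on this page, not from the project's actual codebook, and `validate` is a hypothetical helper.

```python
import json

# Allowed values per dimension, inferred from the samples on this page.
# These sets are an assumption, not the project's definitive codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"approval", "outrage", "mixed", "indifference", "fear"},
}

def validate(raw: str) -> list:
    """Parse the model's JSON array and reject any row with an unknown code."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows
```

Failing loudly on an unknown code is usually better than silently storing it, since a single malformed batch would otherwise pollute the coded dataset.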