Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `ytr_UgyRIHy1J…` — "@FrancisSkodzinski… If the video is meant as entertainment… the only "harm" that…"
- `ytc_UgzLx-R6Q…` — "One day the robot will be indistinguishable from a human. Begging the question….…"
- `ytc_UgzIAIwKI…` — "The thing is, what does A.I have to gain? what does A.I have that it wants? Does…"
- `ytr_UgyD6D7Mr…` — "@bulaloitech You are right. Engineers will continue making it replace every o…"
- `ytc_UgyRo_5Yg…` — "Bullshit bullshit. Bull shit. AI is shit and we are light years away from it get…"
- `ytc_UgxYc_hUM…` — "This is one of quite few things I disagree with Bernie. He brought up, for examp…"
- `ytr_Ugxjua6pv…` — "@A_Sad_Duck yea but i don't mean the dataset, i mean when actually asking the ai…"
- `ytr_UgwZtkD4o…` — "I think it's ok, but they must put a disclaimer telling that ai was used…"
Comment
all I hear is good marketing. Empty promises. Why would an LLM grow smarter with more information fed into it? LLMs are good in predicting knowledge that they were fed with - not with solving novel problems. How would they... they are prediction machines without knowledge of the real world. They lack sensory information and context to really make sense of something. They dont understand anything really.... they dont think... and its also not "they" its "it"... and its a program that cleverly misleads us into projecting intelligence into it bc it can speak....
youtube · 2026-02-11T21:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxOoIYnrlV9T19-hpl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgypoEhceTLLOW9ji3N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy5cDA3X3PoPpMBBx94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugz7Fd5He23XqCbGr7R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzgPbH_SDIyvjlhTNF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_Ugwf07kJ-EHQs0xyHIV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxfZdgEUl-OYBujSkt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxFCi-m_aO29V5FcX14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz3tcBdSOw1eR8DaOp4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwAgpIWNhFRDyiaJt54AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"}
]
```
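The raw response above is a JSON array with one record per comment, keyed by comment ID, with the four coding dimensions shown in the table. A minimal sketch of how such a response could be parsed and validated (assuming Python; the allowed category sets below are inferred only from the values visible in this sample — the real codebook may define more):

```python
import json

# Category sets observed in the sample response above.
# ASSUMPTION: the actual codebook may allow additional values.
ALLOWED = {
    "responsibility": {"developer", "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "industry_self", "liability", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "unclear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept if it is a dict with an "id" field and every
    coding dimension carries a recognized value.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip malformed entries
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

good = ('[{"id":"ytc_example","responsibility":"developer",'
        '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(len(validate_codings(good)))  # 1
```

Filtering rather than raising keeps one bad record from discarding a whole batch; rejected IDs can then be re-queued for recoding.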