Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The core problem is: AI hallucinates, because it's an LLM, not AGI. Only Artificial General Intelligence can (theoretically) do everything a human can think of, but LLMs are just guessing machines; once they start to hallucinate and make the wrong guesses, you are cooked! Every developer knows this; we use AI only as an assistant tool for this reason.
YouTube · AI Jobs · 2026-02-04T12:2… · ♥ 1
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
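
For reference, the five coded dimensions above map onto a small record type. The sketch below is a hypothetical Python model of one coding result; the field names mirror the table and the raw JSON keys, but the class name CodingResult and the type itself are illustrative, not part of the pipeline, and the values in the comments are simply the ones observed in this page's output.

    from dataclasses import dataclass

    @dataclass
    class CodingResult:
        """One coded comment; fields mirror the raw LLM output keys."""
        id: str              # platform-prefixed comment id, e.g. "ytc_..."
        responsibility: str  # values seen here: user, company, ai_itself, none
        reasoning: str       # values seen here: consequentialist, deontological
        policy: str          # values seen here: liability, none
        emotion: str         # values seen here: fear, approval, indifference, outrage

    # The record displayed in the Coding Result table above:
    result = CodingResult(
        id="ytc_UgxT2-v7D2tcuWxzpPV4AaABAg",
        responsibility="ai_itself",
        reasoning="consequentialist",
        policy="none",
        emotion="fear",
    )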
Raw LLM Response
[ {"id":"ytc_UgzWp-s58ZrKfdYDmwR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxT2-v7D2tcuWxzpPV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxHkt1cV-NdV6QZxt14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxDc81VvJuZBs88efZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgynUg7xE-ftWl0kObZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxrnEUnEY-Dz9m2wM14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzNAe88YxjQZbQhWF14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx5Ysxv3uArT2iInPl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz0-uM8832rGEVFS9d4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx1MA3obtCOVtGzZ8N4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"} ]