Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@BringYourOwnLaptop I agree progress has been amazing to this point, and I too was convinced we were seeing an exponential curve in action. But recently my view has changed - certainly the idea that simply scaling data and compute was going to lead to ever smarter AI has been shown to be wrong. Then we see that the 'hallucination' problem is not diminishing in some of the more recent 'reasoning' models but is getting worse. Midjourney is a good example here too - after about a year in development, Midjourney 7 is not showing the increases in capability that earlier iterations showed. It has some nice features, no doubt, but in AI terms a year is a very long time and 7 has failed to impress. Then we have innovations like 'chain of thought' that did promise a new route to smarter AI, but CoT has largely been exposed as something of a gimmick, in that the 'thought process' provided by the AIs has been shown in many cases to be a retrospective fiction rather than an accurate insight into how the AI actually 'thinks'. I may be completely wrong on this, of course - GPT-5 might really be a genuine AGI-level breakthrough - but somehow I doubt it. I have waited years for a self-driving taxi to arrive at my door, and I expect to wait many more years before this happens - and self-driving has been 'a few years away' for nearly a decade. The thing that strikes me whenever somebody claims that this or that professional task will soon be done by AI is that in order for this to happen, the AI must be capable of fully understanding the problem it is trying to solve - and I see no real evidence that AI yet has anywhere near the cognitive capacity to deliver real-world solutions to complex tasks that do not lend themselves to simple word-based UI definitions. For example: try prompting an image generator to create a picture of your face using text inputs - what you get back will not look much like you.
The reason for this is not so much the limits of compute or of data, but of understanding - it's simply not possible to express complex and subtle information in easily digestible text prompts, and in the case of images it's not really possible to accurately define an image using words at all - if it were, why would we need the image?
youtube 2025-05-14T14:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_Ugw-xt2jsqvmt8-Wl5R4AaABAg.AI3vRqW5KXOAI51eFP9ZUi", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugw-xt2jsqvmt8-Wl5R4AaABAg.AI3vRqW5KXOAI5f2kI4IkB", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugw-xt2jsqvmt8-Wl5R4AaABAg.AI3vRqW5KXOAI6Ib4nBfj-", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_UgxlRoTJ_t9cx13MnId4AaABAg.AI3iFIy38Y7AI502Du2PuU", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_Ugxwuc72nm81PfYn9Nd4AaABAg.AI3hjboni5BAI5lMXSUCDT", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugxwuc72nm81PfYn9Nd4AaABAg.AI3hjboni5BAI6LONfxmNq", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_Ugx2JwfkM3AGoWKmxdV4AaABAg.AI3fuVu2Ro4AIB-FJWtFu-", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgwPOFWEfK4gIcTeJyh4AaABAg.AI3cqu_TaF3AI4Cvtzs0Wf", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgwPOFWEfK4gIcTeJyh4AaABAg.AI3cqu_TaF3AI4bvY0yyN4", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgwPOFWEfK4gIcTeJyh4AaABAg.AI3cqu_TaF3AI5Ky-Ndcm9", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
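The raw response is a JSON array of coding records keyed by comment id, so inspecting the codes for any one comment amounts to parsing the array and indexing it by id. A minimal sketch, assuming the response string has been copied into a variable (`raw_response` is abbreviated here to the single record that corresponds to the coding result shown above; variable names are illustrative):

```python
import json

# Abbreviated copy of the "Raw LLM Response" above; in practice, paste the
# full array emitted by the model.
raw_response = '''
[
  {"id": "ytr_Ugw-xt2jsqvmt8-Wl5R4AaABAg.AI3vRqW5KXOAI6Ib4nBfj-",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "mixed"}
]
'''

# Parse the array and index the records by comment id, so the codes for
# any coded comment can be looked up directly.
records = json.loads(raw_response)
by_id = {record["id"]: record for record in records}

code = by_id["ytr_Ugw-xt2jsqvmt8-Wl5R4AaABAg.AI3vRqW5KXOAI6Ib4nBfj-"]
print(code["emotion"])  # mixed
```

The lookup result matches the Coding Result table above (emotion: mixed), which is a quick consistency check between the parsed response and the rendered codes.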