Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I use ChatGPT in assignments and I get A’s even in tests and quizzes I use it an…" (ytc_UgxiS9y3f…)
- "I hope you’re right. Pretty sure you’re wrong. Why? Because your video is about …" (ytc_UgwIaXLvx…)
- "Far more dangerous is not robot,it is you,rich people who doing this things.Robo…" (ytc_Ugwm4EVBp…)
- "It's not just devs, I studied to be a scientific editor, a master's in publishin…" (ytc_UgxUWgF9f…)
- "AI will never outsmart humans but they outdo humans because computers does not g…" (ytc_UgwbZJ0jF…)
- "Another issue is that companies will use AI for stuff that it isn't ready for, c…" (ytc_UgyYp8mfT…)
- "I think what a lot of humans still don’t understand is that more comfort and mor…" (ytc_UgxnkFNaA…)
- "It's not racist. Its biased but not inherently. In statistics bias is the likeli…" (ytr_UgwzPKIVa…)
Comment

> AI will not take our job, at least not yet. It is giving the same bad answers as a code that you can find on StackOverflow. It is like working with a worst of a kind junior dev who can't even use his brain. Not helpful at all, rather upsetting. And has complete Amnesia. Once in a hundreds attempt, it gave an approximately nice answer, the tab got closed and then the who discussion became unrecoverable. Trust me, I tried to ask the same questions, it gave me only wrong answers. (We are talking about Azure SDK, which is Microsoft's own product.)

youtube · AI Jobs · 2024-01-14T15:0… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz-fo82PtqDlyXUdC54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzngCRoKQzoqxBBULV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy9fPpL9vGODYhQSPN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwXQ4waBsF5PBrxGVF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzFP3kDDiAkYzXtdJl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgznDKPbLo3r6PGoJB54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw_srfSfyiDCXPfjmx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxZxhI-T_xiBZFubZN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgymMMChTbY3VZTi15t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwqZ8iNIokba5HAaTh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
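A response like the one above is only usable if every row parses and every dimension carries a known code. The sketch below shows one way to validate such a batch in Python; the allowed value sets are inferred from the codes visible in this excerpt, not from the project's official codebook, so treat them as assumptions.

```python
import json

# Value sets inferred from this page's examples (assumption: the real
# codebook may define additional codes for each dimension).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "none"},
    "policy": {"liability", "none"},
    "emotion": {"outrage", "indifference", "mixed", "resignation",
                "fear", "approval", "none"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding rows.

    A row is kept when it is a dict with an "id" and every dimension
    holds one of the allowed codes; anything else is silently dropped.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid

sample = ('[{"id":"ytc_Ugz-fo82PtqDlyXUdC54AaABAg",'
          '"responsibility":"ai_itself","reasoning":"consequentialist",'
          '"policy":"none","emotion":"outrage"}]')
print(len(validate_codings(sample)))  # 1 valid row
```

Dropping malformed rows (rather than raising) lets a coding run continue past one bad batch; the dropped IDs can then be re-queued for recoding.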